00:00:00.001 Started by upstream project "autotest-per-patch" build number 120977 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.114 The recommended git tool is: git 00:00:00.114 using credential 00000000-0000-0000-0000-000000000002 00:00:00.115 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.141 Fetching changes from the remote Git repository 00:00:00.143 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.177 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.177 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.671 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.683 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.695 Checking out Revision c38b93df03fb3fd90b2ba7d165084f104cec2d9b (FETCH_HEAD) 00:00:05.695 > git config core.sparsecheckout # timeout=10 00:00:05.758 > git read-tree -mu HEAD # timeout=10 00:00:05.775 > git checkout -f c38b93df03fb3fd90b2ba7d165084f104cec2d9b # timeout=5 00:00:05.793 Commit message: "packer: Bump BESClient to latest available version" 00:00:05.793 > git rev-list --no-walk c38b93df03fb3fd90b2ba7d165084f104cec2d9b # timeout=10 00:00:05.867 [Pipeline] Start of Pipeline 00:00:05.881 [Pipeline] library 00:00:05.882 Loading library shm_lib@master 00:00:05.883 Library shm_lib@master is cached. Copying from home. 00:00:05.897 [Pipeline] node 00:00:05.905 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.907 [Pipeline] { 00:00:05.914 [Pipeline] catchError 00:00:05.915 [Pipeline] { 00:00:05.927 [Pipeline] wrap 00:00:05.937 [Pipeline] { 00:00:05.942 [Pipeline] stage 00:00:05.943 [Pipeline] { (Prologue) 00:00:06.133 [Pipeline] sh 00:00:06.413 + logger -p user.info -t JENKINS-CI 00:00:06.431 [Pipeline] echo 00:00:06.432 Node: GP6 00:00:06.439 [Pipeline] sh 00:00:06.739 [Pipeline] setCustomBuildProperty 00:00:06.750 [Pipeline] echo 00:00:06.751 Cleanup processes 00:00:06.756 [Pipeline] sh 00:00:07.041 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.041 3195373 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.055 [Pipeline] sh 00:00:07.340 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.341 ++ grep -v 'sudo pgrep' 00:00:07.341 ++ awk '{print $1}' 00:00:07.341 + sudo kill -9 00:00:07.341 + true 00:00:07.358 [Pipeline] cleanWs 00:00:07.369 [WS-CLEANUP] Deleting project workspace... 00:00:07.369 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.377 [WS-CLEANUP] done 00:00:07.382 [Pipeline] setCustomBuildProperty 00:00:07.398 [Pipeline] sh 00:00:07.685 + sudo git config --global --replace-all safe.directory '*' 00:00:07.762 [Pipeline] nodesByLabel 00:00:07.763 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.771 [Pipeline] httpRequest 00:00:07.775 HttpMethod: GET 00:00:07.775 URL: http://10.211.164.96/packages/jbp_c38b93df03fb3fd90b2ba7d165084f104cec2d9b.tar.gz 00:00:07.780 Sending request to url: http://10.211.164.96/packages/jbp_c38b93df03fb3fd90b2ba7d165084f104cec2d9b.tar.gz 00:00:07.798 Response Code: HTTP/1.1 200 OK 00:00:07.799 Success: Status code 200 is in the accepted range: 200,404 00:00:07.799 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c38b93df03fb3fd90b2ba7d165084f104cec2d9b.tar.gz 00:00:11.614 [Pipeline] sh 00:00:11.906 + tar --no-same-owner -xf jbp_c38b93df03fb3fd90b2ba7d165084f104cec2d9b.tar.gz 00:00:11.925 [Pipeline] httpRequest 00:00:11.929 HttpMethod: GET 00:00:11.929 URL: http://10.211.164.96/packages/spdk_77aac3af83c1d19836048c8eb7cdd65e34512cc3.tar.gz 00:00:11.930 Sending request to url: http://10.211.164.96/packages/spdk_77aac3af83c1d19836048c8eb7cdd65e34512cc3.tar.gz 00:00:11.948 Response Code: HTTP/1.1 200 OK 00:00:11.949 Success: Status code 200 is in the accepted range: 200,404 00:00:11.949 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_77aac3af83c1d19836048c8eb7cdd65e34512cc3.tar.gz 00:01:13.579 [Pipeline] sh 00:01:13.862 + tar --no-same-owner -xf spdk_77aac3af83c1d19836048c8eb7cdd65e34512cc3.tar.gz 00:01:16.407 [Pipeline] sh 00:01:16.692 + git -C spdk log --oneline -n5 00:01:16.692 77aac3af8 nvme/fio_plugin: trim add support for multiple ranges 00:01:16.692 40b97f076 nvme/fio_plugin: add trim support 00:01:16.692 3f2c89791 event: switch reactors to poll mode before stopping 00:01:16.692 443e1ea31 setup.sh: emit command line to /dev/kmsg on Linux 00:01:16.692 a1264177c pkgdep/git: Adjust ICE driver to kernel >= 6.8.x 00:01:16.705 [Pipeline] } 00:01:16.720 [Pipeline] // stage 00:01:16.728 [Pipeline] stage 00:01:16.730 [Pipeline] { (Prepare) 00:01:16.748 [Pipeline] writeFile 00:01:16.765 [Pipeline] sh 00:01:17.050 + logger -p user.info -t JENKINS-CI 00:01:17.064 [Pipeline] sh 00:01:17.347 + logger -p user.info -t JENKINS-CI 00:01:17.361 [Pipeline] sh 00:01:17.646 + cat autorun-spdk.conf 00:01:17.646 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.646 SPDK_TEST_NVMF=1 00:01:17.646 SPDK_TEST_NVME_CLI=1 00:01:17.646 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.646 SPDK_TEST_NVMF_NICS=e810 00:01:17.646 SPDK_TEST_VFIOUSER=1 00:01:17.646 SPDK_RUN_UBSAN=1 00:01:17.646 NET_TYPE=phy 00:01:17.655 RUN_NIGHTLY=0 00:01:17.660 [Pipeline] readFile 00:01:17.683 [Pipeline] withEnv 00:01:17.685 [Pipeline] { 00:01:17.698 [Pipeline] sh 00:01:17.985 + set -ex 00:01:17.986 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:17.986 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:17.986 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.986 ++ SPDK_TEST_NVMF=1 00:01:17.986 ++ SPDK_TEST_NVME_CLI=1 00:01:17.986 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.986 ++ SPDK_TEST_NVMF_NICS=e810 00:01:17.986 ++ SPDK_TEST_VFIOUSER=1 00:01:17.986 ++ SPDK_RUN_UBSAN=1 00:01:17.986 ++ NET_TYPE=phy 00:01:17.986 ++ RUN_NIGHTLY=0 00:01:17.986 + case $SPDK_TEST_NVMF_NICS in 00:01:17.986 + DRIVERS=ice 00:01:17.986 + [[ tcp == \r\d\m\a ]] 00:01:17.986 + [[ -n ice ]] 00:01:17.986 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:17.986 rmmod: 
ERROR: Module mlx4_ib is not currently loaded 00:01:17.986 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:17.986 rmmod: ERROR: Module irdma is not currently loaded 00:01:17.986 rmmod: ERROR: Module i40iw is not currently loaded 00:01:17.986 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:17.986 + true 00:01:17.986 + for D in $DRIVERS 00:01:17.986 + sudo modprobe ice 00:01:17.986 + exit 0 00:01:17.996 [Pipeline] } 00:01:18.015 [Pipeline] // withEnv 00:01:18.021 [Pipeline] } 00:01:18.039 [Pipeline] // stage 00:01:18.051 [Pipeline] catchError 00:01:18.053 [Pipeline] { 00:01:18.070 [Pipeline] timeout 00:01:18.071 Timeout set to expire in 40 min 00:01:18.072 [Pipeline] { 00:01:18.088 [Pipeline] stage 00:01:18.090 [Pipeline] { (Tests) 00:01:18.106 [Pipeline] sh 00:01:18.393 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:18.393 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:18.393 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:18.393 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:18.393 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.393 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:18.393 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:18.393 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:18.393 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:18.393 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:18.393 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:18.393 + source /etc/os-release 00:01:18.393 ++ NAME='Fedora Linux' 00:01:18.393 ++ VERSION='38 (Cloud Edition)' 00:01:18.393 ++ ID=fedora 00:01:18.393 ++ VERSION_ID=38 00:01:18.393 ++ VERSION_CODENAME= 00:01:18.393 ++ PLATFORM_ID=platform:f38 00:01:18.393 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:18.393 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.393 ++ LOGO=fedora-logo-icon 00:01:18.393 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:18.393 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.393 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:18.393 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.393 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:18.393 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:18.393 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:18.393 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:18.393 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:18.393 ++ SUPPORT_END=2024-05-14 00:01:18.393 ++ VARIANT='Cloud Edition' 00:01:18.393 ++ VARIANT_ID=cloud 00:01:18.393 + uname -a 00:01:18.393 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:18.393 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:19.335 Hugepages 00:01:19.336 node hugesize free / total 00:01:19.336 node0 1048576kB 0 / 0 00:01:19.336 node0 2048kB 0 / 0 00:01:19.336 node1 1048576kB 0 / 0 00:01:19.336 node1 2048kB 0 / 0 00:01:19.336 00:01:19.336 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.336 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:19.336 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:19.336 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:19.336 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:19.336 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:19.336 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:19.336 I/OAT 0000:00:04.6 
8086 0e26 0 ioatdma - - 00:01:19.336 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:19.336 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:19.336 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:19.336 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:19.336 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:19.336 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:19.336 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:19.336 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:19.336 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:19.336 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:19.336 + rm -f /tmp/spdk-ld-path 00:01:19.336 + source autorun-spdk.conf 00:01:19.336 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.336 ++ SPDK_TEST_NVMF=1 00:01:19.336 ++ SPDK_TEST_NVME_CLI=1 00:01:19.336 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.336 ++ SPDK_TEST_NVMF_NICS=e810 00:01:19.336 ++ SPDK_TEST_VFIOUSER=1 00:01:19.336 ++ SPDK_RUN_UBSAN=1 00:01:19.336 ++ NET_TYPE=phy 00:01:19.336 ++ RUN_NIGHTLY=0 00:01:19.336 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.336 + [[ -n '' ]] 00:01:19.336 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.596 + for M in /var/spdk/build-*-manifest.txt 00:01:19.596 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:19.596 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:19.596 + for M in /var/spdk/build-*-manifest.txt 00:01:19.596 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.596 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:19.596 ++ uname 00:01:19.596 + [[ Linux == \L\i\n\u\x ]] 00:01:19.596 + sudo dmesg -T 00:01:19.596 + sudo dmesg --clear 00:01:19.596 + dmesg_pid=3196660 00:01:19.596 + [[ Fedora Linux == FreeBSD ]] 00:01:19.596 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.596 + sudo dmesg -Tw 00:01:19.596 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.596 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.596 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:19.596 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:19.596 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.596 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.596 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.596 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.596 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:19.596 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.596 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.596 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.596 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.596 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.596 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.596 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.596 Test configuration: 00:01:19.596 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.596 SPDK_TEST_NVMF=1 00:01:19.596 SPDK_TEST_NVME_CLI=1 00:01:19.596 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.596 SPDK_TEST_NVMF_NICS=e810 00:01:19.596 SPDK_TEST_VFIOUSER=1 00:01:19.596 SPDK_RUN_UBSAN=1 00:01:19.596 NET_TYPE=phy 00:01:19.596 RUN_NIGHTLY=0 15:57:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:19.596 15:57:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.596 15:57:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.596 15:57:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.596 15:57:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.596 15:57:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.596 15:57:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.596 15:57:20 -- paths/export.sh@5 -- $ export PATH 00:01:19.596 15:57:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.596 15:57:20 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:19.596 15:57:20 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:19.596 15:57:20 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713967040.XXXXXX 00:01:19.596 15:57:20 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713967040.yKCo47 00:01:19.596 15:57:20 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:19.596 15:57:20 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:19.596 15:57:20 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:19.596 15:57:20 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:19.596 15:57:20 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.596 15:57:20 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:19.596 15:57:20 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:19.596 15:57:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.596 15:57:20 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:19.596 15:57:20 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:19.596 15:57:20 -- pm/common@17 -- $ local monitor 00:01:19.596 15:57:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.596 15:57:20 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3196694 00:01:19.596 15:57:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.596 15:57:20 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3196696 00:01:19.596 15:57:20 -- pm/common@21 -- $ date +%s 00:01:19.596 15:57:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.596 15:57:20 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3196698 00:01:19.596 15:57:20 -- pm/common@21 -- $ date +%s 00:01:19.596 15:57:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.596 15:57:20 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3196701 00:01:19.596 15:57:20 -- pm/common@21 -- $ date +%s 00:01:19.596 15:57:20 -- pm/common@26 -- $ sleep 1 00:01:19.596 15:57:20 -- pm/common@21 -- $ date +%s 00:01:19.596 15:57:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713967040 00:01:19.596 15:57:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713967040 00:01:19.596 15:57:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713967040 00:01:19.596 15:57:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713967040 00:01:19.596 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713967040_collect-vmstat.pm.log 00:01:19.596 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713967040_collect-bmc-pm.bmc.pm.log 00:01:19.596 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713967040_collect-cpu-load.pm.log 00:01:19.596 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713967040_collect-cpu-temp.pm.log 00:01:20.541 15:57:21 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:20.541 15:57:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.541 15:57:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.541 15:57:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.541 15:57:21 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.541 Wed Apr 24 01:57:21 PM UTC 2024 00:01:20.541 15:57:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.541 v24.05-pre-439-g77aac3af8 00:01:20.541 15:57:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:20.541 15:57:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.541 15:57:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.541 15:57:21 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:20.541 15:57:21 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:20.541 15:57:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.800 ************************************ 00:01:20.800 START TEST ubsan 00:01:20.800 ************************************ 00:01:20.800 15:57:21 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:20.800 using ubsan 00:01:20.800 00:01:20.800 real 0m0.000s 00:01:20.800 user 0m0.000s 00:01:20.800 sys 0m0.000s 00:01:20.800 15:57:21 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:20.800 15:57:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.800 ************************************ 00:01:20.800 END TEST ubsan 00:01:20.800 ************************************ 00:01:20.800 15:57:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.800 15:57:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.800 15:57:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.800 15:57:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.800 15:57:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.800 15:57:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.800 15:57:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.800 15:57:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.800 15:57:21 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:20.800 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:20.800 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:21.060 Using 'verbs' RDMA provider 00:01:31.668 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:41.653 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:41.653 Creating mk/config.mk...done. 00:01:41.653 Creating mk/cc.flags.mk...done. 00:01:41.653 Type 'make' to build. 
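For reference, the configure step recorded above can be reproduced outside Jenkins roughly as follows. This is a minimal sketch, not the exact CI environment: the clone URL and submodule step are assumptions about how the tree was obtained, and --with-fio=/usr/src/fio presumes fio sources live where they do on this host. The flags themselves are copied from the autobuild invocation above (--enable-ubsan is what SPDK_RUN_UBSAN=1 in autorun-spdk.conf drives).

# Sketch: rebuild SPDK with the same configuration as this CI run.
git clone https://github.com/spdk/spdk && cd spdk   # assumed source of the tree
git submodule update --init                         # DPDK, libvfio-user, ISA-L, ...
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j"$(nproc)"                                   # the CI job runs make -j48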
00:01:41.653 15:57:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:41.653 15:57:42 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:41.653 15:57:42 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:41.653 15:57:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.653 ************************************ 00:01:41.653 START TEST make 00:01:41.653 ************************************ 00:01:41.653 15:57:42 -- common/autotest_common.sh@1111 -- $ make -j48 00:01:41.653 make[1]: Nothing to be done for 'all'. 00:01:43.052 The Meson build system 00:01:43.052 Version: 1.3.1 00:01:43.052 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:43.052 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.052 Build type: native build 00:01:43.052 Project name: libvfio-user 00:01:43.052 Project version: 0.0.1 00:01:43.052 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:43.052 C linker for the host machine: cc ld.bfd 2.39-16 00:01:43.052 Host machine cpu family: x86_64 00:01:43.052 Host machine cpu: x86_64 00:01:43.052 Run-time dependency threads found: YES 00:01:43.052 Library dl found: YES 00:01:43.052 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:43.052 Run-time dependency json-c found: YES 0.17 00:01:43.052 Run-time dependency cmocka found: YES 1.1.7 00:01:43.052 Program pytest-3 found: NO 00:01:43.052 Program flake8 found: NO 00:01:43.052 Program misspell-fixer found: NO 00:01:43.052 Program restructuredtext-lint found: NO 00:01:43.052 Program valgrind found: YES (/usr/bin/valgrind) 00:01:43.052 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.052 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.052 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.052 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:43.052 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:43.052 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:43.052 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:43.052 Build targets in project: 8 00:01:43.052 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:43.052 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:43.052 00:01:43.052 libvfio-user 0.0.1 00:01:43.052 00:01:43.052 User defined options 00:01:43.052 buildtype : debug 00:01:43.052 default_library: shared 00:01:43.052 libdir : /usr/local/lib 00:01:43.052 00:01:43.052 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.007 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.007 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.007 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.007 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.007 [4/37] Compiling C object samples/null.p/null.c.o 00:01:44.007 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.007 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.272 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.272 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:44.272 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.272 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.272 [11/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.272 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:44.272 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:44.272 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.272 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.272 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:44.272 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:44.272 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.272 [19/37] Compiling C object samples/client.p/client.c.o 00:01:44.272 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:44.272 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:44.272 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:44.272 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:44.272 [24/37] Compiling C object samples/server.p/server.c.o 00:01:44.272 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.272 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:44.272 [27/37] Linking target samples/client 00:01:44.536 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.536 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.536 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:44.536 [31/37] Linking target test/unit_tests 00:01:44.797 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:44.797 [33/37] Linking target samples/server 00:01:44.797 [34/37] Linking target samples/null 00:01:44.797 [35/37] Linking target samples/lspci 00:01:44.797 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:44.797 [37/37] Linking target samples/gpio-pci-idio-16 00:01:44.797 INFO: autodetecting backend as ninja 00:01:44.797 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
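The [1/37]..[37/37] ninja run above is SPDK's bundled libvfio-user being built; the DESTDIR'd meson install echoed just below stages it into the build tree. A hand-run equivalent, sketched from the options the Meson summary reports (buildtype debug, shared default_library, libdir /usr/local/lib) — the directories are taken from the log, and running the sub-build standalone is the assumption:

# Sketch: drive the libvfio-user sub-build by hand from the SPDK checkout.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
meson setup build/libvfio-user/build-debug libvfio-user \
    --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
ninja -C build/libvfio-user/build-debug
DESTDIR="$PWD/build/libvfio-user" \
    meson install --quiet -C build/libvfio-user/build-debug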
00:01:44.797 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.742 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.742 ninja: no work to do. 00:01:51.105 The Meson build system 00:01:51.105 Version: 1.3.1 00:01:51.105 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:51.105 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:51.105 Build type: native build 00:01:51.105 Program cat found: YES (/usr/bin/cat) 00:01:51.105 Project name: DPDK 00:01:51.105 Project version: 23.11.0 00:01:51.105 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.105 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.105 Host machine cpu family: x86_64 00:01:51.105 Host machine cpu: x86_64 00:01:51.105 Message: ## Building in Developer Mode ## 00:01:51.105 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.105 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:51.105 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.105 Program python3 found: YES (/usr/bin/python3) 00:01:51.105 Program cat found: YES (/usr/bin/cat) 00:01:51.105 Compiler for C supports arguments -march=native: YES 00:01:51.105 Checking for size of "void *" : 8 00:01:51.105 Checking for size of "void *" : 8 (cached) 00:01:51.105 Library m found: YES 00:01:51.105 Library numa found: YES 00:01:51.105 Has header "numaif.h" : YES 00:01:51.105 Library fdt found: NO 00:01:51.105 Library execinfo found: NO 00:01:51.105 Has header "execinfo.h" : YES 00:01:51.105 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.105 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.105 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.105 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.105 Run-time dependency openssl found: YES 3.0.9 00:01:51.105 Run-time dependency libpcap found: YES 1.10.4 00:01:51.105 Has header "pcap.h" with dependency libpcap: YES 00:01:51.105 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.105 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.105 Compiler for C supports arguments -Wformat: YES 00:01:51.105 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.105 Compiler for C supports arguments -Wformat-security: NO 00:01:51.105 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.105 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.105 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.105 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.105 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.105 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.105 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.105 Compiler for C supports arguments -Wundef: YES 00:01:51.105 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.105 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.105 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:51.105 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:51.105 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.105 Program objdump found: YES (/usr/bin/objdump) 00:01:51.105 Compiler for C supports arguments -mavx512f: YES 00:01:51.105 Checking if "AVX512 checking" compiles: YES 00:01:51.105 Fetching value of define "__SSE4_2__" : 1 00:01:51.105 Fetching value of define "__AES__" : 1 00:01:51.105 Fetching value of define "__AVX__" : 1 00:01:51.105 Fetching value of define "__AVX2__" : (undefined) 00:01:51.105 Fetching value of define "__AVX512BW__" : (undefined) 00:01:51.105 Fetching value of define "__AVX512CD__" : (undefined) 00:01:51.105 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:51.105 Fetching value of define "__AVX512F__" : (undefined) 00:01:51.105 Fetching value of define "__AVX512VL__" : (undefined) 00:01:51.105 Fetching value of define "__PCLMUL__" : 1 00:01:51.105 Fetching value of define "__RDRND__" : 1 00:01:51.105 Fetching value of define "__RDSEED__" : (undefined) 00:01:51.105 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:51.105 Fetching value of define "__znver1__" : (undefined) 00:01:51.105 Fetching value of define "__znver2__" : (undefined) 00:01:51.105 Fetching value of define "__znver3__" : (undefined) 00:01:51.105 Fetching value of define "__znver4__" : (undefined) 00:01:51.105 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.105 Message: lib/log: Defining dependency "log" 00:01:51.105 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.105 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.105 Checking for function "getentropy" : NO 00:01:51.105 Message: lib/eal: Defining dependency "eal" 00:01:51.105 Message: lib/ring: Defining dependency "ring" 00:01:51.105 Message: lib/rcu: Defining dependency "rcu" 00:01:51.105 Message: lib/mempool: Defining dependency "mempool" 00:01:51.105 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.105 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.105 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:51.105 Compiler for C supports arguments -mpclmul: YES 00:01:51.105 Compiler for C supports arguments -maes: YES 00:01:51.105 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.105 Compiler for C supports arguments -mavx512bw: YES 00:01:51.105 Compiler for C supports arguments -mavx512dq: YES 00:01:51.105 Compiler for C supports arguments -mavx512vl: YES 00:01:51.105 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.105 Compiler for C supports arguments -mavx2: YES 00:01:51.105 Compiler for C supports arguments -mavx: YES 00:01:51.105 Message: lib/net: Defining dependency "net" 00:01:51.105 Message: lib/meter: Defining dependency "meter" 00:01:51.105 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.105 Message: lib/pci: Defining dependency "pci" 00:01:51.105 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.105 Message: lib/hash: Defining dependency "hash" 00:01:51.105 Message: lib/timer: Defining dependency "timer" 00:01:51.105 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.105 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.105 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.105 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.105 Message: lib/power: Defining dependency "power" 00:01:51.105 Message: lib/reorder: Defining dependency "reorder" 00:01:51.105 Message: lib/security: Defining dependency "security" 
00:01:51.105 Has header "linux/userfaultfd.h" : YES 00:01:51.105 Has header "linux/vduse.h" : YES 00:01:51.105 Message: lib/vhost: Defining dependency "vhost" 00:01:51.105 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.105 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.105 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.105 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.105 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:51.105 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:51.105 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:51.105 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:51.105 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:51.105 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:51.105 Program doxygen found: YES (/usr/bin/doxygen) 00:01:51.105 Configuring doxy-api-html.conf using configuration 00:01:51.105 Configuring doxy-api-man.conf using configuration 00:01:51.105 Program mandb found: YES (/usr/bin/mandb) 00:01:51.106 Program sphinx-build found: NO 00:01:51.106 Configuring rte_build_config.h using configuration 00:01:51.106 Message: 00:01:51.106 ================= 00:01:51.106 Applications Enabled 00:01:51.106 ================= 00:01:51.106 00:01:51.106 apps: 00:01:51.106 00:01:51.106 00:01:51.106 Message: 00:01:51.106 ================= 00:01:51.106 Libraries Enabled 00:01:51.106 ================= 00:01:51.106 00:01:51.106 libs: 00:01:51.106 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:51.106 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:51.106 cryptodev, dmadev, power, reorder, security, vhost, 00:01:51.106 00:01:51.106 Message: 00:01:51.106 =============== 00:01:51.106 Drivers Enabled 00:01:51.106 =============== 00:01:51.106 00:01:51.106 common: 00:01:51.106 00:01:51.106 bus: 00:01:51.106 pci, vdev, 00:01:51.106 mempool: 00:01:51.106 ring, 00:01:51.106 dma: 00:01:51.106 00:01:51.106 net: 00:01:51.106 00:01:51.106 crypto: 00:01:51.106 00:01:51.106 compress: 00:01:51.106 00:01:51.106 vdpa: 00:01:51.106 00:01:51.106 00:01:51.106 Message: 00:01:51.106 ================= 00:01:51.106 Content Skipped 00:01:51.106 ================= 00:01:51.106 00:01:51.106 apps: 00:01:51.106 dumpcap: explicitly disabled via build config 00:01:51.106 graph: explicitly disabled via build config 00:01:51.106 pdump: explicitly disabled via build config 00:01:51.106 proc-info: explicitly disabled via build config 00:01:51.106 test-acl: explicitly disabled via build config 00:01:51.106 test-bbdev: explicitly disabled via build config 00:01:51.106 test-cmdline: explicitly disabled via build config 00:01:51.106 test-compress-perf: explicitly disabled via build config 00:01:51.106 test-crypto-perf: explicitly disabled via build config 00:01:51.106 test-dma-perf: explicitly disabled via build config 00:01:51.106 test-eventdev: explicitly disabled via build config 00:01:51.106 test-fib: explicitly disabled via build config 00:01:51.106 test-flow-perf: explicitly disabled via build config 00:01:51.106 test-gpudev: explicitly disabled via build config 00:01:51.106 test-mldev: explicitly disabled via build config 00:01:51.106 test-pipeline: explicitly disabled via build config 00:01:51.106 test-pmd: explicitly disabled via build config 00:01:51.106 test-regex: explicitly disabled via build config 
00:01:51.106 test-sad: explicitly disabled via build config 00:01:51.106 test-security-perf: explicitly disabled via build config 00:01:51.106 00:01:51.106 libs: 00:01:51.106 metrics: explicitly disabled via build config 00:01:51.106 acl: explicitly disabled via build config 00:01:51.106 bbdev: explicitly disabled via build config 00:01:51.106 bitratestats: explicitly disabled via build config 00:01:51.106 bpf: explicitly disabled via build config 00:01:51.106 cfgfile: explicitly disabled via build config 00:01:51.106 distributor: explicitly disabled via build config 00:01:51.106 efd: explicitly disabled via build config 00:01:51.106 eventdev: explicitly disabled via build config 00:01:51.106 dispatcher: explicitly disabled via build config 00:01:51.106 gpudev: explicitly disabled via build config 00:01:51.106 gro: explicitly disabled via build config 00:01:51.106 gso: explicitly disabled via build config 00:01:51.106 ip_frag: explicitly disabled via build config 00:01:51.106 jobstats: explicitly disabled via build config 00:01:51.106 latencystats: explicitly disabled via build config 00:01:51.106 lpm: explicitly disabled via build config 00:01:51.106 member: explicitly disabled via build config 00:01:51.106 pcapng: explicitly disabled via build config 00:01:51.106 rawdev: explicitly disabled via build config 00:01:51.106 regexdev: explicitly disabled via build config 00:01:51.106 mldev: explicitly disabled via build config 00:01:51.106 rib: explicitly disabled via build config 00:01:51.106 sched: explicitly disabled via build config 00:01:51.106 stack: explicitly disabled via build config 00:01:51.106 ipsec: explicitly disabled via build config 00:01:51.106 pdcp: explicitly disabled via build config 00:01:51.106 fib: explicitly disabled via build config 00:01:51.106 port: explicitly disabled via build config 00:01:51.106 pdump: explicitly disabled via build config 00:01:51.106 table: explicitly disabled via build config 00:01:51.106 pipeline: explicitly disabled via build config 00:01:51.106 graph: explicitly disabled via build config 00:01:51.106 node: explicitly disabled via build config 00:01:51.106 00:01:51.106 drivers: 00:01:51.106 common/cpt: not in enabled drivers build config 00:01:51.106 common/dpaax: not in enabled drivers build config 00:01:51.106 common/iavf: not in enabled drivers build config 00:01:51.106 common/idpf: not in enabled drivers build config 00:01:51.106 common/mvep: not in enabled drivers build config 00:01:51.106 common/octeontx: not in enabled drivers build config 00:01:51.106 bus/auxiliary: not in enabled drivers build config 00:01:51.106 bus/cdx: not in enabled drivers build config 00:01:51.106 bus/dpaa: not in enabled drivers build config 00:01:51.106 bus/fslmc: not in enabled drivers build config 00:01:51.106 bus/ifpga: not in enabled drivers build config 00:01:51.106 bus/platform: not in enabled drivers build config 00:01:51.106 bus/vmbus: not in enabled drivers build config 00:01:51.106 common/cnxk: not in enabled drivers build config 00:01:51.106 common/mlx5: not in enabled drivers build config 00:01:51.106 common/nfp: not in enabled drivers build config 00:01:51.106 common/qat: not in enabled drivers build config 00:01:51.106 common/sfc_efx: not in enabled drivers build config 00:01:51.106 mempool/bucket: not in enabled drivers build config 00:01:51.106 mempool/cnxk: not in enabled drivers build config 00:01:51.106 mempool/dpaa: not in enabled drivers build config 00:01:51.106 mempool/dpaa2: not in enabled drivers build config 00:01:51.106 
mempool/octeontx: not in enabled drivers build config 00:01:51.106 mempool/stack: not in enabled drivers build config 00:01:51.106 dma/cnxk: not in enabled drivers build config 00:01:51.106 dma/dpaa: not in enabled drivers build config 00:01:51.106 dma/dpaa2: not in enabled drivers build config 00:01:51.106 dma/hisilicon: not in enabled drivers build config 00:01:51.106 dma/idxd: not in enabled drivers build config 00:01:51.106 dma/ioat: not in enabled drivers build config 00:01:51.106 dma/skeleton: not in enabled drivers build config 00:01:51.106 net/af_packet: not in enabled drivers build config 00:01:51.106 net/af_xdp: not in enabled drivers build config 00:01:51.106 net/ark: not in enabled drivers build config 00:01:51.106 net/atlantic: not in enabled drivers build config 00:01:51.106 net/avp: not in enabled drivers build config 00:01:51.106 net/axgbe: not in enabled drivers build config 00:01:51.106 net/bnx2x: not in enabled drivers build config 00:01:51.106 net/bnxt: not in enabled drivers build config 00:01:51.106 net/bonding: not in enabled drivers build config 00:01:51.106 net/cnxk: not in enabled drivers build config 00:01:51.106 net/cpfl: not in enabled drivers build config 00:01:51.106 net/cxgbe: not in enabled drivers build config 00:01:51.106 net/dpaa: not in enabled drivers build config 00:01:51.106 net/dpaa2: not in enabled drivers build config 00:01:51.106 net/e1000: not in enabled drivers build config 00:01:51.106 net/ena: not in enabled drivers build config 00:01:51.106 net/enetc: not in enabled drivers build config 00:01:51.106 net/enetfec: not in enabled drivers build config 00:01:51.106 net/enic: not in enabled drivers build config 00:01:51.106 net/failsafe: not in enabled drivers build config 00:01:51.106 net/fm10k: not in enabled drivers build config 00:01:51.106 net/gve: not in enabled drivers build config 00:01:51.106 net/hinic: not in enabled drivers build config 00:01:51.106 net/hns3: not in enabled drivers build config 00:01:51.106 net/i40e: not in enabled drivers build config 00:01:51.106 net/iavf: not in enabled drivers build config 00:01:51.106 net/ice: not in enabled drivers build config 00:01:51.106 net/idpf: not in enabled drivers build config 00:01:51.106 net/igc: not in enabled drivers build config 00:01:51.106 net/ionic: not in enabled drivers build config 00:01:51.106 net/ipn3ke: not in enabled drivers build config 00:01:51.106 net/ixgbe: not in enabled drivers build config 00:01:51.106 net/mana: not in enabled drivers build config 00:01:51.106 net/memif: not in enabled drivers build config 00:01:51.106 net/mlx4: not in enabled drivers build config 00:01:51.106 net/mlx5: not in enabled drivers build config 00:01:51.106 net/mvneta: not in enabled drivers build config 00:01:51.106 net/mvpp2: not in enabled drivers build config 00:01:51.106 net/netvsc: not in enabled drivers build config 00:01:51.106 net/nfb: not in enabled drivers build config 00:01:51.106 net/nfp: not in enabled drivers build config 00:01:51.106 net/ngbe: not in enabled drivers build config 00:01:51.106 net/null: not in enabled drivers build config 00:01:51.106 net/octeontx: not in enabled drivers build config 00:01:51.106 net/octeon_ep: not in enabled drivers build config 00:01:51.106 net/pcap: not in enabled drivers build config 00:01:51.106 net/pfe: not in enabled drivers build config 00:01:51.106 net/qede: not in enabled drivers build config 00:01:51.106 net/ring: not in enabled drivers build config 00:01:51.106 net/sfc: not in enabled drivers build config 00:01:51.106 net/softnic: 
not in enabled drivers build config 00:01:51.106 net/tap: not in enabled drivers build config 00:01:51.106 net/thunderx: not in enabled drivers build config 00:01:51.106 net/txgbe: not in enabled drivers build config 00:01:51.106 net/vdev_netvsc: not in enabled drivers build config 00:01:51.106 net/vhost: not in enabled drivers build config 00:01:51.106 net/virtio: not in enabled drivers build config 00:01:51.106 net/vmxnet3: not in enabled drivers build config 00:01:51.106 raw/*: missing internal dependency, "rawdev" 00:01:51.106 crypto/armv8: not in enabled drivers build config 00:01:51.106 crypto/bcmfs: not in enabled drivers build config 00:01:51.106 crypto/caam_jr: not in enabled drivers build config 00:01:51.106 crypto/ccp: not in enabled drivers build config 00:01:51.106 crypto/cnxk: not in enabled drivers build config 00:01:51.106 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.106 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.107 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.107 crypto/mlx5: not in enabled drivers build config 00:01:51.107 crypto/mvsam: not in enabled drivers build config 00:01:51.107 crypto/nitrox: not in enabled drivers build config 00:01:51.107 crypto/null: not in enabled drivers build config 00:01:51.107 crypto/octeontx: not in enabled drivers build config 00:01:51.107 crypto/openssl: not in enabled drivers build config 00:01:51.107 crypto/scheduler: not in enabled drivers build config 00:01:51.107 crypto/uadk: not in enabled drivers build config 00:01:51.107 crypto/virtio: not in enabled drivers build config 00:01:51.107 compress/isal: not in enabled drivers build config 00:01:51.107 compress/mlx5: not in enabled drivers build config 00:01:51.107 compress/octeontx: not in enabled drivers build config 00:01:51.107 compress/zlib: not in enabled drivers build config 00:01:51.107 regex/*: missing internal dependency, "regexdev" 00:01:51.107 ml/*: missing internal dependency, "mldev" 00:01:51.107 vdpa/ifc: not in enabled drivers build config 00:01:51.107 vdpa/mlx5: not in enabled drivers build config 00:01:51.107 vdpa/nfp: not in enabled drivers build config 00:01:51.107 vdpa/sfc: not in enabled drivers build config 00:01:51.107 event/*: missing internal dependency, "eventdev" 00:01:51.107 baseband/*: missing internal dependency, "bbdev" 00:01:51.107 gpu/*: missing internal dependency, "gpudev" 00:01:51.107 00:01:51.107 00:01:51.107 Build targets in project: 85 00:01:51.107 00:01:51.107 DPDK 23.11.0 00:01:51.107 00:01:51.107 User defined options 00:01:51.107 buildtype : debug 00:01:51.107 default_library : shared 00:01:51.107 libdir : lib 00:01:51.107 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:51.107 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:51.107 c_link_args : 00:01:51.107 cpu_instruction_set: native 00:01:51.107 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:51.107 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:51.107 enable_docs : false 00:01:51.107 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 
00:01:51.107 enable_kmods : false 00:01:51.107 tests : false 00:01:51.107 00:01:51.107 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.107 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:51.107 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.107 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.107 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.107 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.107 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.107 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.107 [7/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.107 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.107 [9/265] Linking static target lib/librte_kvargs.a 00:01:51.107 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.107 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.107 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.107 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.107 [14/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.107 [15/265] Linking static target lib/librte_log.a 00:01:51.107 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.107 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.107 [18/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.107 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.107 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.366 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.628 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.889 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.889 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.889 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.889 [26/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.889 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.889 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.889 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.889 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.889 [31/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.889 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.889 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.889 [34/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.889 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.889 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.889 [37/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.889 [38/265] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.889 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.889 [40/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.889 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.889 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.889 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.889 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.889 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.889 [46/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.889 [47/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.889 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.889 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.889 [50/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.889 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.151 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.151 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.151 [54/265] Linking static target lib/librte_telemetry.a 00:01:52.151 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.151 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.151 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.151 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.151 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.151 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.151 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.151 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.151 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.151 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.151 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.151 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.151 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.151 [68/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.151 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.151 [70/265] Linking static target lib/librte_pci.a 00:01:52.409 [71/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.409 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.409 [73/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.409 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.409 [75/265] Linking target lib/librte_log.so.24.0 00:01:52.409 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.409 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.409 [78/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.409 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.409 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.409 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.409 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.409 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.671 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.671 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.671 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.671 [87/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:52.671 [88/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:52.671 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.671 [90/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:52.936 [91/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.936 [92/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.936 [93/265] Linking target lib/librte_kvargs.so.24.0 00:01:52.936 [94/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.936 [95/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.936 [96/265] Linking static target lib/librte_ring.a 00:01:52.936 [97/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.936 [98/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.936 [99/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.936 [100/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.936 [101/265] Linking static target lib/librte_meter.a 00:01:52.936 [102/265] Linking static target lib/librte_eal.a 00:01:52.936 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.936 [104/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.936 [105/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.936 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.936 [107/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.936 [108/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.200 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.200 [110/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.200 [111/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:53.200 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.200 [113/265] Linking target lib/librte_telemetry.so.24.0 00:01:53.200 [114/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:53.200 [115/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.200 [116/265] Linking static target lib/librte_mempool.a 00:01:53.200 [117/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:53.200 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:53.200 [119/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.200 [120/265] Linking static target lib/librte_rcu.a 00:01:53.200 [121/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:53.200 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.200 [123/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:53.200 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:53.200 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.200 [126/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.200 [127/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:53.200 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.460 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:53.460 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.460 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.460 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:53.460 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.460 [134/265] Linking static target lib/librte_cmdline.a 00:01:53.460 [135/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:53.460 [136/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.460 [137/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.460 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.460 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.460 [140/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.460 [141/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.722 [142/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.722 [143/265] Linking static target lib/librte_timer.a 00:01:53.722 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.722 [145/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.722 [146/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.722 [147/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.722 [148/265] Linking static target lib/librte_net.a 00:01:53.722 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.722 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.983 [151/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.983 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.983 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.983 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.983 [155/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.983 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.983 [157/265] Linking static target lib/librte_dmadev.a 00:01:53.983 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.983 [159/265] Compiling C 
object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.983 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.983 [161/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.241 [162/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.241 [163/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:54.241 [164/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:54.241 [165/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:54.241 [166/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:54.241 [167/265] Linking static target lib/librte_hash.a 00:01:54.241 [168/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:54.241 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:54.241 [170/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.241 [171/265] Linking static target lib/librte_compressdev.a 00:01:54.242 [172/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:54.242 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:54.242 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:54.242 [175/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:54.242 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:54.242 [177/265] Linking static target lib/librte_power.a 00:01:54.500 [178/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.500 [179/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.500 [180/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:54.500 [181/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:54.500 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:54.500 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.500 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:54.500 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:54.500 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:54.500 [187/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:54.500 [188/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.500 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.759 [190/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:54.759 [191/265] Linking static target lib/librte_reorder.a 00:01:54.759 [192/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.759 [193/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:54.759 [194/265] Linking static target lib/librte_mbuf.a 00:01:54.759 [195/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.759 [196/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.759 [197/265] Linking static target drivers/librte_bus_vdev.a 00:01:54.759 [198/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 
00:01:54.759 [199/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.759 [200/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:54.759 [201/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.759 [202/265] Linking static target lib/librte_security.a 00:01:54.759 [203/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.759 [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.759 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.759 [206/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.759 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.759 [208/265] Linking static target drivers/librte_bus_pci.a 00:01:55.017 [209/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.017 [210/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.017 [211/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.017 [212/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.017 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.017 [214/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.017 [215/265] Linking static target drivers/librte_mempool_ring.a 00:01:55.017 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.017 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.017 [218/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:55.017 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.017 [220/265] Linking static target lib/librte_cryptodev.a 00:01:55.275 [221/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:55.275 [222/265] Linking static target lib/librte_ethdev.a 00:01:55.275 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.211 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.145 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.686 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.686 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.686 [228/265] Linking target lib/librte_eal.so.24.0 00:01:59.686 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:59.686 [230/265] Linking target lib/librte_pci.so.24.0 00:01:59.686 [231/265] Linking target lib/librte_meter.so.24.0 00:01:59.686 [232/265] Linking target lib/librte_ring.so.24.0 00:01:59.686 [233/265] Linking target lib/librte_timer.so.24.0 00:01:59.686 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:59.686 [235/265] Linking target lib/librte_dmadev.so.24.0 00:01:59.686 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:59.686 [237/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:59.686 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:59.686 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:59.686 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:59.686 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:59.686 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:59.686 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:59.686 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:59.686 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:59.944 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:59.944 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:59.944 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:59.944 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:59.944 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:59.944 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:01:59.944 [252/265] Linking target lib/librte_net.so.24.0 00:02:00.203 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:00.203 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:00.203 [255/265] Linking target lib/librte_security.so.24.0 00:02:00.203 [256/265] Linking target lib/librte_cmdline.so.24.0 00:02:00.203 [257/265] Linking target lib/librte_hash.so.24.0 00:02:00.203 [258/265] Linking target lib/librte_ethdev.so.24.0 00:02:00.203 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:00.461 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:00.461 [261/265] Linking target lib/librte_power.so.24.0 00:02:03.188 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:03.188 [263/265] Linking static target lib/librte_vhost.a 00:02:04.121 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.121 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:04.121 INFO: autodetecting backend as ninja 00:02:04.121 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:05.054 CC lib/ut/ut.o 00:02:05.054 CC lib/ut_mock/mock.o 00:02:05.054 CC lib/log/log.o 00:02:05.054 CC lib/log/log_flags.o 00:02:05.054 CC lib/log/log_deprecated.o 00:02:05.313 LIB libspdk_ut_mock.a 00:02:05.313 LIB libspdk_ut.a 00:02:05.313 LIB libspdk_log.a 00:02:05.313 SO libspdk_ut_mock.so.6.0 00:02:05.313 SO libspdk_ut.so.2.0 00:02:05.313 SO libspdk_log.so.7.0 00:02:05.313 SYMLINK libspdk_ut_mock.so 00:02:05.313 SYMLINK libspdk_ut.so 00:02:05.313 SYMLINK libspdk_log.so 00:02:05.571 CC lib/ioat/ioat.o 00:02:05.571 CXX lib/trace_parser/trace.o 00:02:05.571 CC lib/dma/dma.o 00:02:05.571 CC lib/util/base64.o 00:02:05.571 CC lib/util/bit_array.o 00:02:05.571 CC lib/util/cpuset.o 00:02:05.571 CC lib/util/crc16.o 00:02:05.571 CC lib/util/crc32.o 00:02:05.571 CC lib/util/crc32c.o 00:02:05.571 CC lib/util/crc32_ieee.o 00:02:05.571 CC lib/util/crc64.o 00:02:05.571 CC lib/util/dif.o 00:02:05.571 CC lib/util/fd.o 00:02:05.571 CC lib/util/file.o 00:02:05.571 CC lib/util/hexlify.o 00:02:05.571 CC lib/util/iov.o 00:02:05.571 CC 
lib/util/math.o 00:02:05.571 CC lib/util/pipe.o 00:02:05.571 CC lib/util/strerror_tls.o 00:02:05.571 CC lib/util/string.o 00:02:05.571 CC lib/util/uuid.o 00:02:05.571 CC lib/util/fd_group.o 00:02:05.571 CC lib/util/xor.o 00:02:05.571 CC lib/util/zipf.o 00:02:05.571 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.571 CC lib/vfio_user/host/vfio_user.o 00:02:05.829 LIB libspdk_dma.a 00:02:05.829 SO libspdk_dma.so.4.0 00:02:05.829 SYMLINK libspdk_dma.so 00:02:05.829 LIB libspdk_ioat.a 00:02:05.829 SO libspdk_ioat.so.7.0 00:02:05.829 LIB libspdk_vfio_user.a 00:02:05.829 SYMLINK libspdk_ioat.so 00:02:05.829 SO libspdk_vfio_user.so.5.0 00:02:06.087 SYMLINK libspdk_vfio_user.so 00:02:06.087 LIB libspdk_util.a 00:02:06.087 SO libspdk_util.so.9.0 00:02:06.344 SYMLINK libspdk_util.so 00:02:06.344 CC lib/conf/conf.o 00:02:06.344 CC lib/json/json_parse.o 00:02:06.344 CC lib/env_dpdk/env.o 00:02:06.344 CC lib/rdma/common.o 00:02:06.344 CC lib/vmd/vmd.o 00:02:06.344 CC lib/json/json_util.o 00:02:06.344 CC lib/env_dpdk/memory.o 00:02:06.344 CC lib/idxd/idxd.o 00:02:06.344 CC lib/rdma/rdma_verbs.o 00:02:06.344 CC lib/vmd/led.o 00:02:06.344 CC lib/json/json_write.o 00:02:06.344 CC lib/env_dpdk/pci.o 00:02:06.344 CC lib/idxd/idxd_user.o 00:02:06.344 CC lib/env_dpdk/init.o 00:02:06.344 CC lib/env_dpdk/threads.o 00:02:06.344 CC lib/env_dpdk/pci_ioat.o 00:02:06.344 CC lib/env_dpdk/pci_virtio.o 00:02:06.344 CC lib/env_dpdk/pci_vmd.o 00:02:06.344 CC lib/env_dpdk/pci_idxd.o 00:02:06.344 CC lib/env_dpdk/pci_event.o 00:02:06.344 CC lib/env_dpdk/sigbus_handler.o 00:02:06.345 CC lib/env_dpdk/pci_dpdk.o 00:02:06.345 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:06.345 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.602 LIB libspdk_trace_parser.a 00:02:06.602 SO libspdk_trace_parser.so.5.0 00:02:06.602 SYMLINK libspdk_trace_parser.so 00:02:06.602 LIB libspdk_conf.a 00:02:06.861 SO libspdk_conf.so.6.0 00:02:06.861 LIB libspdk_rdma.a 00:02:06.861 LIB libspdk_json.a 00:02:06.861 SYMLINK libspdk_conf.so 00:02:06.861 SO libspdk_rdma.so.6.0 00:02:06.861 SO libspdk_json.so.6.0 00:02:06.861 SYMLINK libspdk_rdma.so 00:02:06.861 SYMLINK libspdk_json.so 00:02:07.118 LIB libspdk_idxd.a 00:02:07.119 CC lib/jsonrpc/jsonrpc_server.o 00:02:07.119 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:07.119 CC lib/jsonrpc/jsonrpc_client.o 00:02:07.119 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:07.119 SO libspdk_idxd.so.12.0 00:02:07.119 SYMLINK libspdk_idxd.so 00:02:07.119 LIB libspdk_vmd.a 00:02:07.119 SO libspdk_vmd.so.6.0 00:02:07.119 SYMLINK libspdk_vmd.so 00:02:07.376 LIB libspdk_jsonrpc.a 00:02:07.376 SO libspdk_jsonrpc.so.6.0 00:02:07.376 SYMLINK libspdk_jsonrpc.so 00:02:07.634 CC lib/rpc/rpc.o 00:02:07.891 LIB libspdk_rpc.a 00:02:07.891 SO libspdk_rpc.so.6.0 00:02:07.891 SYMLINK libspdk_rpc.so 00:02:08.149 CC lib/trace/trace.o 00:02:08.149 CC lib/notify/notify.o 00:02:08.149 CC lib/keyring/keyring.o 00:02:08.149 CC lib/notify/notify_rpc.o 00:02:08.149 CC lib/trace/trace_flags.o 00:02:08.149 CC lib/keyring/keyring_rpc.o 00:02:08.149 CC lib/trace/trace_rpc.o 00:02:08.149 LIB libspdk_notify.a 00:02:08.149 SO libspdk_notify.so.6.0 00:02:08.407 LIB libspdk_keyring.a 00:02:08.407 LIB libspdk_trace.a 00:02:08.407 SYMLINK libspdk_notify.so 00:02:08.407 SO libspdk_keyring.so.1.0 00:02:08.407 SO libspdk_trace.so.10.0 00:02:08.407 SYMLINK libspdk_keyring.so 00:02:08.407 SYMLINK libspdk_trace.so 00:02:08.407 LIB libspdk_env_dpdk.a 00:02:08.407 CC lib/thread/thread.o 00:02:08.407 CC lib/sock/sock.o 00:02:08.407 CC lib/thread/iobuf.o 00:02:08.407 CC lib/sock/sock_rpc.o 
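The CC / LIB / SO / SYMLINK records in this stretch are SPDK's per-library build steps: compile each source to an object, archive the objects into a static library, link the archive into a versioned shared object, then drop an unversioned development symlink. A minimal sketch of what one such sequence corresponds to, using libspdk_log as the example — the compiler and linker flags below are illustrative assumptions, not the exact rules from SPDK's mk/ makefiles:

  # CC      lib/log/log.o
  cc -fPIC -c lib/log/log.c -o log.o
  # LIB     libspdk_log.a          (static archive of the objects)
  ar rcs libspdk_log.a log.o
  # SO      libspdk_log.so.7.0     (versioned shared library; soname assumed)
  cc -shared -Wl,-soname,libspdk_log.so.7 \
     -Wl,--whole-archive libspdk_log.a -Wl,--no-whole-archive \
     -o libspdk_log.so.7.0
  # SYMLINK libspdk_log.so         (unversioned link used when linking consumers)
  ln -sf libspdk_log.so.7.0 libspdk_log.so

The versioned SO plus symlink is what lets later link steps use plain -lspdk_log while runtime loading still resolves the pinned soname.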
00:02:08.665 SO libspdk_env_dpdk.so.14.0 00:02:08.665 SYMLINK libspdk_env_dpdk.so 00:02:08.923 LIB libspdk_sock.a 00:02:08.923 SO libspdk_sock.so.9.0 00:02:08.923 SYMLINK libspdk_sock.so 00:02:09.181 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:09.181 CC lib/nvme/nvme_ctrlr.o 00:02:09.181 CC lib/nvme/nvme_fabric.o 00:02:09.181 CC lib/nvme/nvme_ns_cmd.o 00:02:09.181 CC lib/nvme/nvme_ns.o 00:02:09.181 CC lib/nvme/nvme_pcie_common.o 00:02:09.181 CC lib/nvme/nvme_pcie.o 00:02:09.181 CC lib/nvme/nvme_qpair.o 00:02:09.181 CC lib/nvme/nvme.o 00:02:09.181 CC lib/nvme/nvme_quirks.o 00:02:09.181 CC lib/nvme/nvme_transport.o 00:02:09.181 CC lib/nvme/nvme_discovery.o 00:02:09.181 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:09.181 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:09.181 CC lib/nvme/nvme_tcp.o 00:02:09.181 CC lib/nvme/nvme_opal.o 00:02:09.181 CC lib/nvme/nvme_io_msg.o 00:02:09.181 CC lib/nvme/nvme_poll_group.o 00:02:09.181 CC lib/nvme/nvme_zns.o 00:02:09.181 CC lib/nvme/nvme_stubs.o 00:02:09.181 CC lib/nvme/nvme_auth.o 00:02:09.181 CC lib/nvme/nvme_cuse.o 00:02:09.181 CC lib/nvme/nvme_vfio_user.o 00:02:09.181 CC lib/nvme/nvme_rdma.o 00:02:10.118 LIB libspdk_thread.a 00:02:10.118 SO libspdk_thread.so.10.0 00:02:10.118 SYMLINK libspdk_thread.so 00:02:10.376 CC lib/vfu_tgt/tgt_endpoint.o 00:02:10.376 CC lib/blob/blobstore.o 00:02:10.376 CC lib/accel/accel.o 00:02:10.376 CC lib/vfu_tgt/tgt_rpc.o 00:02:10.376 CC lib/blob/request.o 00:02:10.376 CC lib/accel/accel_rpc.o 00:02:10.376 CC lib/blob/zeroes.o 00:02:10.376 CC lib/accel/accel_sw.o 00:02:10.376 CC lib/blob/blob_bs_dev.o 00:02:10.376 CC lib/virtio/virtio.o 00:02:10.376 CC lib/init/json_config.o 00:02:10.376 CC lib/virtio/virtio_vhost_user.o 00:02:10.376 CC lib/init/subsystem.o 00:02:10.376 CC lib/virtio/virtio_vfio_user.o 00:02:10.376 CC lib/init/subsystem_rpc.o 00:02:10.376 CC lib/virtio/virtio_pci.o 00:02:10.376 CC lib/init/rpc.o 00:02:10.634 LIB libspdk_init.a 00:02:10.634 SO libspdk_init.so.5.0 00:02:10.634 LIB libspdk_vfu_tgt.a 00:02:10.634 LIB libspdk_virtio.a 00:02:10.634 SYMLINK libspdk_init.so 00:02:10.634 SO libspdk_vfu_tgt.so.3.0 00:02:10.634 SO libspdk_virtio.so.7.0 00:02:10.891 SYMLINK libspdk_vfu_tgt.so 00:02:10.891 SYMLINK libspdk_virtio.so 00:02:10.891 CC lib/event/app.o 00:02:10.891 CC lib/event/reactor.o 00:02:10.891 CC lib/event/log_rpc.o 00:02:10.891 CC lib/event/app_rpc.o 00:02:10.891 CC lib/event/scheduler_static.o 00:02:11.149 LIB libspdk_event.a 00:02:11.407 SO libspdk_event.so.13.0 00:02:11.407 SYMLINK libspdk_event.so 00:02:11.407 LIB libspdk_accel.a 00:02:11.407 SO libspdk_accel.so.15.0 00:02:11.407 SYMLINK libspdk_accel.so 00:02:11.675 LIB libspdk_nvme.a 00:02:11.675 CC lib/bdev/bdev.o 00:02:11.675 CC lib/bdev/bdev_rpc.o 00:02:11.675 CC lib/bdev/bdev_zone.o 00:02:11.675 CC lib/bdev/part.o 00:02:11.675 CC lib/bdev/scsi_nvme.o 00:02:11.675 SO libspdk_nvme.so.13.0 00:02:11.934 SYMLINK libspdk_nvme.so 00:02:13.307 LIB libspdk_blob.a 00:02:13.307 SO libspdk_blob.so.11.0 00:02:13.307 SYMLINK libspdk_blob.so 00:02:13.564 CC lib/blobfs/blobfs.o 00:02:13.564 CC lib/blobfs/tree.o 00:02:13.564 CC lib/lvol/lvol.o 00:02:14.130 LIB libspdk_bdev.a 00:02:14.130 LIB libspdk_blobfs.a 00:02:14.130 SO libspdk_bdev.so.15.0 00:02:14.130 SO libspdk_blobfs.so.10.0 00:02:14.394 SYMLINK libspdk_blobfs.so 00:02:14.394 SYMLINK libspdk_bdev.so 00:02:14.394 LIB libspdk_lvol.a 00:02:14.394 SO libspdk_lvol.so.10.0 00:02:14.394 SYMLINK libspdk_lvol.so 00:02:14.394 CC lib/nvmf/ctrlr.o 00:02:14.394 CC lib/nbd/nbd.o 00:02:14.394 CC lib/ublk/ublk.o 00:02:14.394 CC 
lib/scsi/dev.o 00:02:14.394 CC lib/ublk/ublk_rpc.o 00:02:14.394 CC lib/nvmf/ctrlr_discovery.o 00:02:14.394 CC lib/nbd/nbd_rpc.o 00:02:14.394 CC lib/scsi/lun.o 00:02:14.394 CC lib/ftl/ftl_core.o 00:02:14.394 CC lib/nvmf/ctrlr_bdev.o 00:02:14.394 CC lib/ftl/ftl_init.o 00:02:14.394 CC lib/scsi/port.o 00:02:14.394 CC lib/nvmf/subsystem.o 00:02:14.394 CC lib/ftl/ftl_layout.o 00:02:14.394 CC lib/nvmf/nvmf.o 00:02:14.394 CC lib/scsi/scsi.o 00:02:14.394 CC lib/nvmf/nvmf_rpc.o 00:02:14.394 CC lib/ftl/ftl_debug.o 00:02:14.394 CC lib/scsi/scsi_bdev.o 00:02:14.394 CC lib/nvmf/transport.o 00:02:14.394 CC lib/scsi/scsi_pr.o 00:02:14.394 CC lib/ftl/ftl_io.o 00:02:14.394 CC lib/scsi/scsi_rpc.o 00:02:14.394 CC lib/ftl/ftl_sb.o 00:02:14.394 CC lib/nvmf/tcp.o 00:02:14.394 CC lib/nvmf/vfio_user.o 00:02:14.394 CC lib/scsi/task.o 00:02:14.394 CC lib/ftl/ftl_l2p.o 00:02:14.394 CC lib/ftl/ftl_l2p_flat.o 00:02:14.394 CC lib/nvmf/rdma.o 00:02:14.394 CC lib/ftl/ftl_nv_cache.o 00:02:14.394 CC lib/ftl/ftl_band.o 00:02:14.394 CC lib/ftl/ftl_band_ops.o 00:02:14.394 CC lib/ftl/ftl_writer.o 00:02:14.394 CC lib/ftl/ftl_rq.o 00:02:14.394 CC lib/ftl/ftl_reloc.o 00:02:14.394 CC lib/ftl/ftl_l2p_cache.o 00:02:14.394 CC lib/ftl/ftl_p2l.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:14.394 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:14.976 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:14.976 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:14.976 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:14.976 CC lib/ftl/utils/ftl_conf.o 00:02:14.976 CC lib/ftl/utils/ftl_md.o 00:02:14.976 CC lib/ftl/utils/ftl_mempool.o 00:02:14.976 CC lib/ftl/utils/ftl_bitmap.o 00:02:14.976 CC lib/ftl/utils/ftl_property.o 00:02:14.976 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:14.976 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:14.976 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:14.976 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:14.976 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:14.976 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:14.976 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:14.976 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:14.976 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:14.976 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:14.976 CC lib/ftl/base/ftl_base_dev.o 00:02:14.976 CC lib/ftl/base/ftl_base_bdev.o 00:02:14.976 CC lib/ftl/ftl_trace.o 00:02:15.234 LIB libspdk_nbd.a 00:02:15.234 SO libspdk_nbd.so.7.0 00:02:15.234 SYMLINK libspdk_nbd.so 00:02:15.492 LIB libspdk_scsi.a 00:02:15.492 SO libspdk_scsi.so.9.0 00:02:15.492 LIB libspdk_ublk.a 00:02:15.492 SO libspdk_ublk.so.3.0 00:02:15.492 SYMLINK libspdk_ublk.so 00:02:15.492 SYMLINK libspdk_scsi.so 00:02:15.778 CC lib/vhost/vhost.o 00:02:15.779 CC lib/iscsi/conn.o 00:02:15.779 CC lib/iscsi/init_grp.o 00:02:15.779 CC lib/vhost/vhost_rpc.o 00:02:15.779 CC lib/vhost/vhost_scsi.o 00:02:15.779 CC lib/iscsi/iscsi.o 00:02:15.779 CC lib/iscsi/md5.o 00:02:15.779 CC lib/vhost/vhost_blk.o 00:02:15.779 CC lib/vhost/rte_vhost_user.o 00:02:15.779 CC lib/iscsi/param.o 00:02:15.779 CC lib/iscsi/portal_grp.o 00:02:15.779 CC lib/iscsi/tgt_node.o 00:02:15.779 CC lib/iscsi/iscsi_subsystem.o 00:02:15.779 CC lib/iscsi/iscsi_rpc.o 00:02:15.779 CC lib/iscsi/task.o 00:02:15.779 LIB libspdk_ftl.a 00:02:16.036 SO 
libspdk_ftl.so.9.0 00:02:16.294 SYMLINK libspdk_ftl.so 00:02:16.859 LIB libspdk_vhost.a 00:02:16.859 SO libspdk_vhost.so.8.0 00:02:17.117 LIB libspdk_nvmf.a 00:02:17.117 SYMLINK libspdk_vhost.so 00:02:17.117 SO libspdk_nvmf.so.18.0 00:02:17.117 LIB libspdk_iscsi.a 00:02:17.117 SO libspdk_iscsi.so.8.0 00:02:17.374 SYMLINK libspdk_nvmf.so 00:02:17.374 SYMLINK libspdk_iscsi.so 00:02:17.633 CC module/vfu_device/vfu_virtio.o 00:02:17.633 CC module/vfu_device/vfu_virtio_blk.o 00:02:17.633 CC module/env_dpdk/env_dpdk_rpc.o 00:02:17.633 CC module/vfu_device/vfu_virtio_scsi.o 00:02:17.633 CC module/vfu_device/vfu_virtio_rpc.o 00:02:17.633 CC module/accel/ioat/accel_ioat.o 00:02:17.633 CC module/keyring/file/keyring.o 00:02:17.633 CC module/sock/posix/posix.o 00:02:17.633 CC module/accel/iaa/accel_iaa.o 00:02:17.633 CC module/accel/error/accel_error.o 00:02:17.633 CC module/accel/dsa/accel_dsa.o 00:02:17.633 CC module/blob/bdev/blob_bdev.o 00:02:17.633 CC module/accel/ioat/accel_ioat_rpc.o 00:02:17.633 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:17.633 CC module/keyring/file/keyring_rpc.o 00:02:17.633 CC module/accel/dsa/accel_dsa_rpc.o 00:02:17.633 CC module/accel/error/accel_error_rpc.o 00:02:17.633 CC module/accel/iaa/accel_iaa_rpc.o 00:02:17.633 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:17.633 CC module/scheduler/gscheduler/gscheduler.o 00:02:17.633 LIB libspdk_env_dpdk_rpc.a 00:02:17.633 SO libspdk_env_dpdk_rpc.so.6.0 00:02:17.891 SYMLINK libspdk_env_dpdk_rpc.so 00:02:17.891 LIB libspdk_keyring_file.a 00:02:17.891 LIB libspdk_scheduler_gscheduler.a 00:02:17.891 LIB libspdk_scheduler_dpdk_governor.a 00:02:17.891 SO libspdk_scheduler_gscheduler.so.4.0 00:02:17.891 SO libspdk_keyring_file.so.1.0 00:02:17.891 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:17.891 LIB libspdk_accel_error.a 00:02:17.891 LIB libspdk_accel_ioat.a 00:02:17.891 LIB libspdk_scheduler_dynamic.a 00:02:17.891 LIB libspdk_accel_iaa.a 00:02:17.891 SO libspdk_accel_error.so.2.0 00:02:17.891 SO libspdk_accel_ioat.so.6.0 00:02:17.891 SO libspdk_scheduler_dynamic.so.4.0 00:02:17.891 SYMLINK libspdk_scheduler_gscheduler.so 00:02:17.891 SYMLINK libspdk_keyring_file.so 00:02:17.891 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:17.891 SO libspdk_accel_iaa.so.3.0 00:02:17.891 LIB libspdk_accel_dsa.a 00:02:17.891 SYMLINK libspdk_accel_error.so 00:02:17.891 LIB libspdk_blob_bdev.a 00:02:17.891 SYMLINK libspdk_scheduler_dynamic.so 00:02:17.891 SO libspdk_accel_dsa.so.5.0 00:02:17.891 SYMLINK libspdk_accel_ioat.so 00:02:17.891 SYMLINK libspdk_accel_iaa.so 00:02:17.891 SO libspdk_blob_bdev.so.11.0 00:02:17.891 SYMLINK libspdk_accel_dsa.so 00:02:17.891 SYMLINK libspdk_blob_bdev.so 00:02:18.190 LIB libspdk_vfu_device.a 00:02:18.190 SO libspdk_vfu_device.so.3.0 00:02:18.190 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:18.190 CC module/bdev/delay/vbdev_delay.o 00:02:18.190 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.190 CC module/bdev/nvme/bdev_nvme.o 00:02:18.190 CC module/bdev/gpt/gpt.o 00:02:18.190 CC module/bdev/raid/bdev_raid.o 00:02:18.190 CC module/bdev/ftl/bdev_ftl.o 00:02:18.190 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.190 CC module/bdev/malloc/bdev_malloc.o 00:02:18.190 CC module/bdev/null/bdev_null.o 00:02:18.190 CC module/bdev/aio/bdev_aio.o 00:02:18.190 CC module/bdev/raid/bdev_raid_rpc.o 00:02:18.190 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.190 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.190 CC module/bdev/error/vbdev_error.o 00:02:18.190 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.190 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.190 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.190 CC module/bdev/null/bdev_null_rpc.o 00:02:18.190 CC module/bdev/iscsi/bdev_iscsi.o 00:02:18.190 CC module/bdev/nvme/nvme_rpc.o 00:02:18.190 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:18.190 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.190 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.190 CC module/bdev/aio/bdev_aio_rpc.o 00:02:18.190 CC module/bdev/raid/bdev_raid_sb.o 00:02:18.190 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:18.190 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.190 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.190 CC module/bdev/raid/raid0.o 00:02:18.190 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:18.190 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.190 CC module/bdev/raid/raid1.o 00:02:18.190 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.190 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:18.190 CC module/bdev/split/vbdev_split.o 00:02:18.190 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.190 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.190 CC module/bdev/nvme/vbdev_opal.o 00:02:18.190 CC module/bdev/raid/concat.o 00:02:18.190 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:18.190 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:18.476 SYMLINK libspdk_vfu_device.so 00:02:18.476 LIB libspdk_sock_posix.a 00:02:18.476 SO libspdk_sock_posix.so.6.0 00:02:18.740 LIB libspdk_blobfs_bdev.a 00:02:18.740 SO libspdk_blobfs_bdev.so.6.0 00:02:18.740 SYMLINK libspdk_sock_posix.so 00:02:18.740 LIB libspdk_bdev_split.a 00:02:18.740 SYMLINK libspdk_blobfs_bdev.so 00:02:18.740 SO libspdk_bdev_split.so.6.0 00:02:18.740 LIB libspdk_bdev_null.a 00:02:18.740 LIB libspdk_bdev_gpt.a 00:02:18.740 LIB libspdk_bdev_error.a 00:02:18.740 SO libspdk_bdev_null.so.6.0 00:02:18.740 LIB libspdk_bdev_passthru.a 00:02:18.740 LIB libspdk_bdev_ftl.a 00:02:18.740 SYMLINK libspdk_bdev_split.so 00:02:18.740 SO libspdk_bdev_gpt.so.6.0 00:02:18.740 SO libspdk_bdev_error.so.6.0 00:02:18.740 SO libspdk_bdev_passthru.so.6.0 00:02:18.740 SO libspdk_bdev_ftl.so.6.0 00:02:18.740 LIB libspdk_bdev_zone_block.a 00:02:18.740 LIB libspdk_bdev_aio.a 00:02:18.740 SYMLINK libspdk_bdev_null.so 00:02:18.740 LIB libspdk_bdev_malloc.a 00:02:18.740 SYMLINK libspdk_bdev_error.so 00:02:18.740 SYMLINK libspdk_bdev_gpt.so 00:02:18.740 SO libspdk_bdev_zone_block.so.6.0 00:02:18.740 SO libspdk_bdev_aio.so.6.0 00:02:18.999 SYMLINK libspdk_bdev_passthru.so 00:02:18.999 SO libspdk_bdev_malloc.so.6.0 00:02:18.999 SYMLINK libspdk_bdev_ftl.so 00:02:18.999 LIB libspdk_bdev_iscsi.a 00:02:18.999 SYMLINK libspdk_bdev_zone_block.so 00:02:18.999 SYMLINK libspdk_bdev_aio.so 00:02:18.999 LIB libspdk_bdev_delay.a 00:02:18.999 SO libspdk_bdev_iscsi.so.6.0 00:02:18.999 SYMLINK libspdk_bdev_malloc.so 00:02:18.999 SO libspdk_bdev_delay.so.6.0 00:02:18.999 SYMLINK libspdk_bdev_iscsi.so 00:02:18.999 SYMLINK libspdk_bdev_delay.so 00:02:18.999 LIB libspdk_bdev_lvol.a 00:02:18.999 LIB libspdk_bdev_virtio.a 00:02:18.999 SO libspdk_bdev_lvol.so.6.0 00:02:18.999 SO libspdk_bdev_virtio.so.6.0 00:02:19.257 SYMLINK libspdk_bdev_lvol.so 00:02:19.257 SYMLINK libspdk_bdev_virtio.so 00:02:19.514 LIB libspdk_bdev_raid.a 00:02:19.514 SO libspdk_bdev_raid.so.6.0 00:02:19.514 SYMLINK libspdk_bdev_raid.so 00:02:20.449 LIB libspdk_bdev_nvme.a 00:02:20.708 SO libspdk_bdev_nvme.so.7.0 00:02:20.708 SYMLINK libspdk_bdev_nvme.so 00:02:20.966 CC module/event/subsystems/sock/sock.o 00:02:20.966 CC 
module/event/subsystems/scheduler/scheduler.o 00:02:20.966 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.966 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.966 CC module/event/subsystems/keyring/keyring.o 00:02:20.966 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.966 CC module/event/subsystems/vmd/vmd.o 00:02:20.966 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:20.966 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:21.224 LIB libspdk_event_keyring.a 00:02:21.224 LIB libspdk_event_vhost_blk.a 00:02:21.224 LIB libspdk_event_sock.a 00:02:21.224 LIB libspdk_event_vfu_tgt.a 00:02:21.224 LIB libspdk_event_scheduler.a 00:02:21.224 LIB libspdk_event_vmd.a 00:02:21.224 SO libspdk_event_keyring.so.1.0 00:02:21.224 LIB libspdk_event_iobuf.a 00:02:21.224 SO libspdk_event_vhost_blk.so.3.0 00:02:21.224 SO libspdk_event_sock.so.5.0 00:02:21.224 SO libspdk_event_vfu_tgt.so.3.0 00:02:21.224 SO libspdk_event_scheduler.so.4.0 00:02:21.224 SO libspdk_event_vmd.so.6.0 00:02:21.224 SO libspdk_event_iobuf.so.3.0 00:02:21.224 SYMLINK libspdk_event_keyring.so 00:02:21.224 SYMLINK libspdk_event_sock.so 00:02:21.224 SYMLINK libspdk_event_vhost_blk.so 00:02:21.224 SYMLINK libspdk_event_vfu_tgt.so 00:02:21.224 SYMLINK libspdk_event_scheduler.so 00:02:21.224 SYMLINK libspdk_event_vmd.so 00:02:21.224 SYMLINK libspdk_event_iobuf.so 00:02:21.483 CC module/event/subsystems/accel/accel.o 00:02:21.741 LIB libspdk_event_accel.a 00:02:21.741 SO libspdk_event_accel.so.6.0 00:02:21.741 SYMLINK libspdk_event_accel.so 00:02:21.998 CC module/event/subsystems/bdev/bdev.o 00:02:21.998 LIB libspdk_event_bdev.a 00:02:21.998 SO libspdk_event_bdev.so.6.0 00:02:22.256 SYMLINK libspdk_event_bdev.so 00:02:22.256 CC module/event/subsystems/ublk/ublk.o 00:02:22.256 CC module/event/subsystems/nbd/nbd.o 00:02:22.256 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:22.256 CC module/event/subsystems/scsi/scsi.o 00:02:22.256 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.514 LIB libspdk_event_nbd.a 00:02:22.514 LIB libspdk_event_ublk.a 00:02:22.514 SO libspdk_event_nbd.so.6.0 00:02:22.514 LIB libspdk_event_scsi.a 00:02:22.514 SO libspdk_event_ublk.so.3.0 00:02:22.514 SO libspdk_event_scsi.so.6.0 00:02:22.514 SYMLINK libspdk_event_nbd.so 00:02:22.514 SYMLINK libspdk_event_ublk.so 00:02:22.514 SYMLINK libspdk_event_scsi.so 00:02:22.514 LIB libspdk_event_nvmf.a 00:02:22.514 SO libspdk_event_nvmf.so.6.0 00:02:22.514 SYMLINK libspdk_event_nvmf.so 00:02:22.773 CC module/event/subsystems/iscsi/iscsi.o 00:02:22.773 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:22.773 LIB libspdk_event_vhost_scsi.a 00:02:22.773 LIB libspdk_event_iscsi.a 00:02:22.773 SO libspdk_event_vhost_scsi.so.3.0 00:02:22.773 SO libspdk_event_iscsi.so.6.0 00:02:23.032 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.032 SYMLINK libspdk_event_iscsi.so 00:02:23.032 SO libspdk.so.6.0 00:02:23.032 SYMLINK libspdk.so 00:02:23.294 CXX app/trace/trace.o 00:02:23.294 CC app/trace_record/trace_record.o 00:02:23.294 CC app/spdk_nvme_perf/perf.o 00:02:23.294 CC app/spdk_nvme_discover/discovery_aer.o 00:02:23.294 CC app/spdk_nvme_identify/identify.o 00:02:23.294 CC app/spdk_top/spdk_top.o 00:02:23.294 TEST_HEADER include/spdk/accel.h 00:02:23.294 CC test/rpc_client/rpc_client_test.o 00:02:23.294 CC app/spdk_lspci/spdk_lspci.o 00:02:23.294 TEST_HEADER include/spdk/accel_module.h 00:02:23.294 TEST_HEADER include/spdk/assert.h 00:02:23.294 TEST_HEADER include/spdk/barrier.h 00:02:23.294 TEST_HEADER include/spdk/base64.h 00:02:23.294 TEST_HEADER 
include/spdk/bdev.h 00:02:23.294 TEST_HEADER include/spdk/bdev_module.h 00:02:23.294 TEST_HEADER include/spdk/bdev_zone.h 00:02:23.294 TEST_HEADER include/spdk/bit_array.h 00:02:23.294 TEST_HEADER include/spdk/bit_pool.h 00:02:23.294 TEST_HEADER include/spdk/blob_bdev.h 00:02:23.294 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:23.294 TEST_HEADER include/spdk/blobfs.h 00:02:23.294 TEST_HEADER include/spdk/blob.h 00:02:23.294 TEST_HEADER include/spdk/conf.h 00:02:23.294 TEST_HEADER include/spdk/config.h 00:02:23.294 TEST_HEADER include/spdk/cpuset.h 00:02:23.294 TEST_HEADER include/spdk/crc16.h 00:02:23.294 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:23.294 CC app/spdk_dd/spdk_dd.o 00:02:23.294 TEST_HEADER include/spdk/crc32.h 00:02:23.294 TEST_HEADER include/spdk/crc64.h 00:02:23.294 TEST_HEADER include/spdk/dif.h 00:02:23.294 TEST_HEADER include/spdk/dma.h 00:02:23.294 TEST_HEADER include/spdk/endian.h 00:02:23.294 CC app/iscsi_tgt/iscsi_tgt.o 00:02:23.294 TEST_HEADER include/spdk/env_dpdk.h 00:02:23.294 TEST_HEADER include/spdk/env.h 00:02:23.294 TEST_HEADER include/spdk/event.h 00:02:23.294 TEST_HEADER include/spdk/fd_group.h 00:02:23.294 TEST_HEADER include/spdk/fd.h 00:02:23.294 TEST_HEADER include/spdk/file.h 00:02:23.294 CC app/nvmf_tgt/nvmf_main.o 00:02:23.294 TEST_HEADER include/spdk/ftl.h 00:02:23.294 TEST_HEADER include/spdk/gpt_spec.h 00:02:23.294 TEST_HEADER include/spdk/hexlify.h 00:02:23.294 CC app/vhost/vhost.o 00:02:23.294 TEST_HEADER include/spdk/histogram_data.h 00:02:23.294 TEST_HEADER include/spdk/idxd.h 00:02:23.294 TEST_HEADER include/spdk/idxd_spec.h 00:02:23.295 TEST_HEADER include/spdk/init.h 00:02:23.295 TEST_HEADER include/spdk/ioat.h 00:02:23.295 CC examples/ioat/perf/perf.o 00:02:23.295 TEST_HEADER include/spdk/ioat_spec.h 00:02:23.295 CC examples/nvme/arbitration/arbitration.o 00:02:23.295 TEST_HEADER include/spdk/iscsi_spec.h 00:02:23.295 CC examples/vmd/lsvmd/lsvmd.o 00:02:23.295 CC examples/ioat/verify/verify.o 00:02:23.295 CC app/spdk_tgt/spdk_tgt.o 00:02:23.295 TEST_HEADER include/spdk/json.h 00:02:23.295 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:23.295 CC examples/nvme/reconnect/reconnect.o 00:02:23.295 TEST_HEADER include/spdk/jsonrpc.h 00:02:23.295 CC examples/nvme/hotplug/hotplug.o 00:02:23.295 CC examples/accel/perf/accel_perf.o 00:02:23.295 CC examples/sock/hello_world/hello_sock.o 00:02:23.295 CC examples/util/zipf/zipf.o 00:02:23.295 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:23.295 TEST_HEADER include/spdk/keyring.h 00:02:23.295 CC examples/nvme/hello_world/hello_world.o 00:02:23.295 CC examples/idxd/perf/perf.o 00:02:23.295 TEST_HEADER include/spdk/keyring_module.h 00:02:23.295 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:23.295 CC examples/nvme/abort/abort.o 00:02:23.295 TEST_HEADER include/spdk/likely.h 00:02:23.559 CC examples/vmd/led/led.o 00:02:23.559 CC app/fio/nvme/fio_plugin.o 00:02:23.559 CC test/event/event_perf/event_perf.o 00:02:23.559 TEST_HEADER include/spdk/log.h 00:02:23.559 TEST_HEADER include/spdk/lvol.h 00:02:23.560 TEST_HEADER include/spdk/memory.h 00:02:23.560 CC test/thread/poller_perf/poller_perf.o 00:02:23.560 TEST_HEADER include/spdk/mmio.h 00:02:23.560 CC test/nvme/aer/aer.o 00:02:23.560 TEST_HEADER include/spdk/nbd.h 00:02:23.560 TEST_HEADER include/spdk/notify.h 00:02:23.560 TEST_HEADER include/spdk/nvme.h 00:02:23.560 TEST_HEADER include/spdk/nvme_intel.h 00:02:23.560 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:23.560 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:23.560 TEST_HEADER 
include/spdk/nvme_spec.h 00:02:23.560 TEST_HEADER include/spdk/nvme_zns.h 00:02:23.560 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:23.560 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:23.560 TEST_HEADER include/spdk/nvmf.h 00:02:23.560 CC test/bdev/bdevio/bdevio.o 00:02:23.560 TEST_HEADER include/spdk/nvmf_spec.h 00:02:23.560 CC examples/nvmf/nvmf/nvmf.o 00:02:23.560 CC test/blobfs/mkfs/mkfs.o 00:02:23.560 TEST_HEADER include/spdk/nvmf_transport.h 00:02:23.560 CC examples/bdev/bdevperf/bdevperf.o 00:02:23.560 CC examples/bdev/hello_world/hello_bdev.o 00:02:23.560 CC test/accel/dif/dif.o 00:02:23.560 CC examples/thread/thread/thread_ex.o 00:02:23.560 CC examples/blob/cli/blobcli.o 00:02:23.560 TEST_HEADER include/spdk/opal.h 00:02:23.560 TEST_HEADER include/spdk/opal_spec.h 00:02:23.560 CC test/dma/test_dma/test_dma.o 00:02:23.560 TEST_HEADER include/spdk/pci_ids.h 00:02:23.560 CC test/app/bdev_svc/bdev_svc.o 00:02:23.560 CC examples/blob/hello_world/hello_blob.o 00:02:23.560 TEST_HEADER include/spdk/pipe.h 00:02:23.560 TEST_HEADER include/spdk/queue.h 00:02:23.560 TEST_HEADER include/spdk/reduce.h 00:02:23.560 TEST_HEADER include/spdk/rpc.h 00:02:23.560 TEST_HEADER include/spdk/scheduler.h 00:02:23.560 TEST_HEADER include/spdk/scsi.h 00:02:23.560 TEST_HEADER include/spdk/scsi_spec.h 00:02:23.560 TEST_HEADER include/spdk/sock.h 00:02:23.560 TEST_HEADER include/spdk/stdinc.h 00:02:23.560 TEST_HEADER include/spdk/string.h 00:02:23.560 TEST_HEADER include/spdk/thread.h 00:02:23.560 TEST_HEADER include/spdk/trace.h 00:02:23.560 TEST_HEADER include/spdk/trace_parser.h 00:02:23.560 CC test/env/mem_callbacks/mem_callbacks.o 00:02:23.560 TEST_HEADER include/spdk/tree.h 00:02:23.560 TEST_HEADER include/spdk/ublk.h 00:02:23.560 TEST_HEADER include/spdk/util.h 00:02:23.560 TEST_HEADER include/spdk/uuid.h 00:02:23.560 TEST_HEADER include/spdk/version.h 00:02:23.560 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:23.560 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:23.560 TEST_HEADER include/spdk/vhost.h 00:02:23.560 LINK spdk_lspci 00:02:23.560 TEST_HEADER include/spdk/vmd.h 00:02:23.560 TEST_HEADER include/spdk/xor.h 00:02:23.560 TEST_HEADER include/spdk/zipf.h 00:02:23.560 CXX test/cpp_headers/accel.o 00:02:23.560 CC test/lvol/esnap/esnap.o 00:02:23.560 LINK rpc_client_test 00:02:23.825 LINK spdk_nvme_discover 00:02:23.825 LINK lsvmd 00:02:23.825 LINK interrupt_tgt 00:02:23.825 LINK event_perf 00:02:23.825 LINK zipf 00:02:23.825 LINK poller_perf 00:02:23.825 LINK spdk_trace_record 00:02:23.825 LINK nvmf_tgt 00:02:23.825 LINK led 00:02:23.825 LINK vhost 00:02:23.825 LINK iscsi_tgt 00:02:23.825 LINK cmb_copy 00:02:23.825 LINK pmr_persistence 00:02:23.825 LINK ioat_perf 00:02:23.825 LINK spdk_tgt 00:02:23.825 LINK verify 00:02:23.825 LINK hello_world 00:02:23.825 LINK mkfs 00:02:23.825 LINK hotplug 00:02:23.825 LINK bdev_svc 00:02:23.825 LINK hello_sock 00:02:24.088 LINK hello_bdev 00:02:24.088 LINK hello_blob 00:02:24.088 CXX test/cpp_headers/accel_module.o 00:02:24.088 LINK thread 00:02:24.088 LINK spdk_dd 00:02:24.088 LINK aer 00:02:24.088 LINK arbitration 00:02:24.088 LINK idxd_perf 00:02:24.088 LINK nvmf 00:02:24.088 LINK reconnect 00:02:24.088 CC test/env/vtophys/vtophys.o 00:02:24.088 CXX test/cpp_headers/assert.o 00:02:24.088 LINK spdk_trace 00:02:24.088 LINK abort 00:02:24.356 LINK bdevio 00:02:24.356 LINK dif 00:02:24.356 CC test/event/reactor/reactor.o 00:02:24.356 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.356 LINK test_dma 00:02:24.356 CC test/app/jsoncat/jsoncat.o 
00:02:24.356 CC test/app/histogram_perf/histogram_perf.o 00:02:24.356 CC app/fio/bdev/fio_plugin.o 00:02:24.356 CC test/event/reactor_perf/reactor_perf.o 00:02:24.356 CXX test/cpp_headers/barrier.o 00:02:24.356 LINK accel_perf 00:02:24.356 CXX test/cpp_headers/base64.o 00:02:24.356 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.356 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.356 LINK nvme_manage 00:02:24.356 CC test/nvme/reset/reset.o 00:02:24.356 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.356 CC test/env/memory/memory_ut.o 00:02:24.356 CXX test/cpp_headers/bdev.o 00:02:24.356 CC test/event/app_repeat/app_repeat.o 00:02:24.623 LINK vtophys 00:02:24.623 CC test/app/stub/stub.o 00:02:24.623 CXX test/cpp_headers/bdev_module.o 00:02:24.623 CC test/env/pci/pci_ut.o 00:02:24.623 LINK blobcli 00:02:24.623 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.623 CXX test/cpp_headers/bdev_zone.o 00:02:24.623 CC test/nvme/sgl/sgl.o 00:02:24.623 CC test/nvme/e2edp/nvme_dp.o 00:02:24.623 LINK reactor 00:02:24.623 LINK spdk_nvme 00:02:24.623 CXX test/cpp_headers/bit_array.o 00:02:24.623 CC test/event/scheduler/scheduler.o 00:02:24.623 CXX test/cpp_headers/bit_pool.o 00:02:24.623 CXX test/cpp_headers/blob_bdev.o 00:02:24.623 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.623 CC test/nvme/overhead/overhead.o 00:02:24.623 CXX test/cpp_headers/blobfs.o 00:02:24.623 LINK env_dpdk_post_init 00:02:24.623 LINK histogram_perf 00:02:24.623 LINK jsoncat 00:02:24.623 CC test/nvme/err_injection/err_injection.o 00:02:24.623 LINK reactor_perf 00:02:24.623 CC test/nvme/startup/startup.o 00:02:24.623 CC test/nvme/reserve/reserve.o 00:02:24.623 CXX test/cpp_headers/blob.o 00:02:24.890 LINK mem_callbacks 00:02:24.890 CC test/nvme/simple_copy/simple_copy.o 00:02:24.890 CC test/nvme/connect_stress/connect_stress.o 00:02:24.890 CXX test/cpp_headers/conf.o 00:02:24.890 LINK app_repeat 00:02:24.890 CXX test/cpp_headers/config.o 00:02:24.890 CXX test/cpp_headers/cpuset.o 00:02:24.890 CC test/nvme/boot_partition/boot_partition.o 00:02:24.890 LINK spdk_nvme_perf 00:02:24.890 CXX test/cpp_headers/crc16.o 00:02:24.890 CC test/nvme/compliance/nvme_compliance.o 00:02:24.890 CXX test/cpp_headers/crc32.o 00:02:24.890 LINK stub 00:02:24.890 CXX test/cpp_headers/crc64.o 00:02:24.890 CC test/nvme/fused_ordering/fused_ordering.o 00:02:24.890 CXX test/cpp_headers/dif.o 00:02:24.890 CC test/nvme/cuse/cuse.o 00:02:24.890 LINK reset 00:02:24.890 CC test/nvme/fdp/fdp.o 00:02:24.890 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:24.890 CXX test/cpp_headers/dma.o 00:02:24.890 CXX test/cpp_headers/endian.o 00:02:24.890 CXX test/cpp_headers/env_dpdk.o 00:02:24.890 CXX test/cpp_headers/env.o 00:02:24.890 CXX test/cpp_headers/event.o 00:02:24.890 CXX test/cpp_headers/fd_group.o 00:02:24.890 CXX test/cpp_headers/fd.o 00:02:24.890 CXX test/cpp_headers/file.o 00:02:24.890 CXX test/cpp_headers/ftl.o 00:02:25.155 CXX test/cpp_headers/gpt_spec.o 00:02:25.155 CXX test/cpp_headers/hexlify.o 00:02:25.155 CXX test/cpp_headers/histogram_data.o 00:02:25.155 LINK err_injection 00:02:25.155 LINK bdevperf 00:02:25.155 LINK scheduler 00:02:25.155 LINK spdk_nvme_identify 00:02:25.155 LINK spdk_top 00:02:25.155 LINK sgl 00:02:25.155 LINK startup 00:02:25.155 CXX test/cpp_headers/idxd.o 00:02:25.155 LINK nvme_dp 00:02:25.155 CXX test/cpp_headers/idxd_spec.o 00:02:25.155 LINK nvme_fuzz 00:02:25.155 LINK reserve 00:02:25.155 LINK connect_stress 00:02:25.155 CXX test/cpp_headers/init.o 00:02:25.155 LINK overhead 00:02:25.155 CXX test/cpp_headers/ioat.o 
00:02:25.155 CXX test/cpp_headers/ioat_spec.o 00:02:25.155 LINK boot_partition 00:02:25.155 CXX test/cpp_headers/iscsi_spec.o 00:02:25.155 CXX test/cpp_headers/json.o 00:02:25.155 LINK pci_ut 00:02:25.155 CXX test/cpp_headers/jsonrpc.o 00:02:25.419 CXX test/cpp_headers/keyring.o 00:02:25.419 LINK simple_copy 00:02:25.419 CXX test/cpp_headers/keyring_module.o 00:02:25.419 LINK vhost_fuzz 00:02:25.419 CXX test/cpp_headers/likely.o 00:02:25.419 LINK spdk_bdev 00:02:25.419 LINK doorbell_aers 00:02:25.419 CXX test/cpp_headers/log.o 00:02:25.419 CXX test/cpp_headers/lvol.o 00:02:25.419 CXX test/cpp_headers/memory.o 00:02:25.419 CXX test/cpp_headers/mmio.o 00:02:25.419 LINK fused_ordering 00:02:25.419 CXX test/cpp_headers/nbd.o 00:02:25.419 CXX test/cpp_headers/notify.o 00:02:25.419 CXX test/cpp_headers/nvme.o 00:02:25.419 CXX test/cpp_headers/nvme_intel.o 00:02:25.419 CXX test/cpp_headers/nvme_ocssd.o 00:02:25.419 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:25.419 CXX test/cpp_headers/nvme_spec.o 00:02:25.419 CXX test/cpp_headers/nvme_zns.o 00:02:25.419 CXX test/cpp_headers/nvmf_cmd.o 00:02:25.419 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:25.419 CXX test/cpp_headers/nvmf.o 00:02:25.419 CXX test/cpp_headers/nvmf_spec.o 00:02:25.419 CXX test/cpp_headers/nvmf_transport.o 00:02:25.419 CXX test/cpp_headers/opal.o 00:02:25.419 CXX test/cpp_headers/opal_spec.o 00:02:25.419 CXX test/cpp_headers/pci_ids.o 00:02:25.419 CXX test/cpp_headers/pipe.o 00:02:25.419 CXX test/cpp_headers/queue.o 00:02:25.680 CXX test/cpp_headers/reduce.o 00:02:25.680 CXX test/cpp_headers/rpc.o 00:02:25.680 LINK nvme_compliance 00:02:25.680 CXX test/cpp_headers/scheduler.o 00:02:25.680 CXX test/cpp_headers/scsi.o 00:02:25.680 CXX test/cpp_headers/scsi_spec.o 00:02:25.680 CXX test/cpp_headers/sock.o 00:02:25.680 CXX test/cpp_headers/stdinc.o 00:02:25.680 CXX test/cpp_headers/string.o 00:02:25.680 CXX test/cpp_headers/thread.o 00:02:25.680 CXX test/cpp_headers/trace.o 00:02:25.680 CXX test/cpp_headers/trace_parser.o 00:02:25.680 LINK fdp 00:02:25.680 CXX test/cpp_headers/tree.o 00:02:25.680 CXX test/cpp_headers/ublk.o 00:02:25.680 CXX test/cpp_headers/util.o 00:02:25.680 CXX test/cpp_headers/uuid.o 00:02:25.680 CXX test/cpp_headers/version.o 00:02:25.680 CXX test/cpp_headers/vfio_user_pci.o 00:02:25.680 CXX test/cpp_headers/vfio_user_spec.o 00:02:25.680 CXX test/cpp_headers/vhost.o 00:02:25.680 CXX test/cpp_headers/vmd.o 00:02:25.680 CXX test/cpp_headers/xor.o 00:02:25.680 CXX test/cpp_headers/zipf.o 00:02:26.246 LINK memory_ut 00:02:26.504 LINK cuse 00:02:26.762 LINK iscsi_fuzz 00:02:29.291 LINK esnap 00:02:29.550 00:02:29.550 real 0m48.160s 00:02:29.550 user 10m9.736s 00:02:29.550 sys 2m27.255s 00:02:29.550 15:58:30 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:29.550 15:58:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.550 ************************************ 00:02:29.550 END TEST make 00:02:29.550 ************************************ 00:02:29.550 15:58:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:29.550 15:58:30 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:29.550 15:58:30 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:29.550 15:58:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.550 15:58:30 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:29.550 15:58:30 -- pm/common@45 -- $ pid=3196708 00:02:29.550 15:58:30 -- pm/common@52 -- $ sudo kill -TERM 3196708 00:02:29.550 
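The pm/common trace immediately above is autobuild's resource-monitor teardown: stop_monitor_resources (spdk/autobuild.sh@1) calls signal_monitor_resources TERM, which walks MONITOR_RESOURCES, looks for a pidfile per monitor under the power output directory, and TERM-kills the recorded pid. A reconstructed sketch of that loop — the real helper lives in spdk/scripts/perf/pm/common, and the pidfile directory variable below is an assumption for the sketch, not the verbatim source:

  # Sketch of the pattern shown by the xtrace (pm/common@41..52 above).
  signal_monitor_resources() {
      local signal=$1 monitor pid
      for monitor in "${MONITOR_RESOURCES[@]}"; do
          local pidfile=$power_dir/$monitor.pid   # $power_dir: assumed name for .../output/power
          [[ -e $pidfile ]] || continue           # matches 'pm/common@44 [[ -e ....pid ]]'
          pid=$(<"$pidfile")                      # matches 'pm/common@45 pid=3196708'
          sudo kill -"$signal" "$pid"             # matches 'pm/common@52 sudo kill -TERM 3196708'
      done
  }

Invoked as signal_monitor_resources TERM, this reproduces the four kill -TERM records visible in the trace, one per collector.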
15:58:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.550 15:58:30 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:29.550 15:58:30 -- pm/common@45 -- $ pid=3196709
00:02:29.550 15:58:30 -- pm/common@52 -- $ sudo kill -TERM 3196709
00:02:29.550 15:58:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.550 15:58:30 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:29.550 15:58:30 -- pm/common@45 -- $ pid=3196710
00:02:29.550 15:58:30 -- pm/common@52 -- $ sudo kill -TERM 3196710
00:02:29.550 15:58:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.550 15:58:30 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:29.550 15:58:30 -- pm/common@45 -- $ pid=3196707
00:02:29.550 15:58:30 -- pm/common@52 -- $ sudo kill -TERM 3196707
00:02:29.550 15:58:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:29.550 15:58:30 -- nvmf/common.sh@7 -- # uname -s
00:02:29.550 15:58:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:29.550 15:58:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:29.550 15:58:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:29.550 15:58:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:29.550 15:58:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:29.550 15:58:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:29.550 15:58:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:29.550 15:58:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:29.550 15:58:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:29.550 15:58:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:29.809 15:58:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:02:29.809 15:58:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:02:29.809 15:58:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:29.809 15:58:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:29.809 15:58:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:29.809 15:58:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:29.809 15:58:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:29.809 15:58:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:29.809 15:58:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:29.809 15:58:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:29.809 15:58:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.809 15:58:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.809 15:58:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.809 15:58:30 -- paths/export.sh@5 -- # export PATH
00:02:29.809 15:58:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.809 15:58:30 -- nvmf/common.sh@47 -- # : 0
00:02:29.809 15:58:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:29.809 15:58:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:29.809 15:58:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:29.809 15:58:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:29.809 15:58:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:29.809 15:58:30 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:29.809 15:58:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:29.809 15:58:30 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:29.809 15:58:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:29.809 15:58:30 -- spdk/autotest.sh@32 -- # uname -s
00:02:29.809 15:58:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:29.809 15:58:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:29.809 15:58:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:29.809 15:58:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:29.809 15:58:30 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:29.809 15:58:30 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:29.810 15:58:30 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:29.810 15:58:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:29.810 15:58:30 -- spdk/autotest.sh@48 -- # udevadm_pid=3251447
00:02:29.810 15:58:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:29.810 15:58:30 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:29.810 15:58:30 -- pm/common@17 -- # local monitor
00:02:29.810 15:58:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.810 15:58:30 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3251448
00:02:29.810 15:58:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.810 15:58:30 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3251450
00:02:29.810 15:58:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.810 15:58:30 -- pm/common@21 -- # date +%s
00:02:29.810 15:58:30 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3251454
00:02:29.810 15:58:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.810 15:58:30 -- pm/common@21 -- # date +%s
00:02:29.810 15:58:30 -- pm/common@21 -- # date +%s
00:02:29.810 15:58:30 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3251457
00:02:29.810 15:58:30 -- pm/common@26 -- # sleep 1
00:02:29.810 15:58:30 -- pm/common@21 -- # date +%s
00:02:29.810 15:58:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713967110
00:02:29.810 15:58:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713967110
00:02:29.810 15:58:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713967110
00:02:29.810 15:58:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713967110
00:02:29.810 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713967110_collect-vmstat.pm.log
00:02:29.810 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713967110_collect-bmc-pm.bmc.pm.log
00:02:29.810 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713967110_collect-cpu-load.pm.log
00:02:29.810 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713967110_collect-cpu-temp.pm.log
00:02:30.751 15:58:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:30.751 15:58:31 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:30.751 15:58:31 -- common/autotest_common.sh@710 -- # xtrace_disable
00:02:30.751 15:58:31 -- common/autotest_common.sh@10 -- # set +x
00:02:30.751 15:58:31 -- spdk/autotest.sh@59 -- # create_test_list
00:02:30.751 15:58:31 -- common/autotest_common.sh@734 -- # xtrace_disable
00:02:30.751 15:58:31 -- common/autotest_common.sh@10 -- # set +x
00:02:30.751 15:58:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:30.751 15:58:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:30.751 15:58:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:30.751 15:58:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:30.751 15:58:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:30.751 15:58:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:30.751 15:58:31 -- common/autotest_common.sh@1441 -- # uname
00:02:30.751 15:58:31 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']'
00:02:30.751 15:58:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:30.751 15:58:31 -- common/autotest_common.sh@1461 -- # uname
00:02:30.751 15:58:31 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]]
00:02:30.751 15:58:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:30.751 15:58:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:30.751 15:58:31 -- spdk/autotest.sh@72 -- # hash lcov
00:02:30.751 15:58:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:30.751 15:58:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:30.751 --rc lcov_branch_coverage=1
00:02:30.751 --rc lcov_function_coverage=1
00:02:30.751 --rc genhtml_branch_coverage=1
00:02:30.751 --rc genhtml_function_coverage=1
00:02:30.751 --rc genhtml_legend=1
00:02:30.751 --rc geninfo_all_blocks=1
00:02:30.751 '
00:02:30.751 15:58:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:30.751 --rc lcov_branch_coverage=1
00:02:30.751 --rc lcov_function_coverage=1
00:02:30.751 --rc genhtml_branch_coverage=1
00:02:30.751 --rc genhtml_function_coverage=1
00:02:30.751 --rc genhtml_legend=1
00:02:30.751 --rc geninfo_all_blocks=1
00:02:30.751 '
00:02:30.751 15:58:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:30.751 --rc lcov_branch_coverage=1
00:02:30.751 --rc lcov_function_coverage=1
00:02:30.751 --rc genhtml_branch_coverage=1
00:02:30.751 --rc genhtml_function_coverage=1
00:02:30.751 --rc genhtml_legend=1
00:02:30.751 --rc geninfo_all_blocks=1
00:02:30.751 --no-external'
00:02:30.751 15:58:31 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:30.751 --rc lcov_branch_coverage=1
00:02:30.751 --rc lcov_function_coverage=1
00:02:30.751 --rc genhtml_branch_coverage=1
00:02:30.751 --rc genhtml_function_coverage=1
00:02:30.751 --rc genhtml_legend=1
00:02:30.751 --rc geninfo_all_blocks=1
00:02:30.751 --no-external'
00:02:30.751 15:58:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:30.751 lcov: LCOV version 1.14
00:02:30.751 15:58:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:02:40.738 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:40.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:40.739 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:40.739 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no 
functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:40.739 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:40.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:40.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 
00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:40.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:44.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:44.028 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:56.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:56.301 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:02:56.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:02:56.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:02:56.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:02:56.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:03:04.430 15:59:04 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:03:04.430 15:59:04 -- common/autotest_common.sh@710 -- # xtrace_disable
00:03:04.430 15:59:04 -- common/autotest_common.sh@10 -- # set +x
00:03:04.430 15:59:04 -- spdk/autotest.sh@91 -- # rm -f
00:03:04.430 15:59:04 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:04.430 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:03:04.430 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:03:04.430 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:03:04.430 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:03:04.430 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:03:04.430 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:03:04.430 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:03:04.430 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:03:04.688 0000:0b:00.0 (8086 0a54): Already using the nvme driver
00:03:04.688 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:03:04.688 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:03:04.688 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:03:04.688 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:03:04.688 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:03:04.688 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:03:04.688 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:03:04.688 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:03:04.688 15:59:05 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:04.688 15:59:05 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:04.688 15:59:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:04.688 15:59:05 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:04.688 15:59:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:04.688 15:59:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:04.688 15:59:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:04.688 15:59:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:04.688 15:59:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:04.688 15:59:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:04.688 15:59:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:04.688 15:59:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:04.688 15:59:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:04.688 15:59:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:04.688 15:59:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:04.688 No valid GPT data, bailing
00:03:04.688 15:59:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:04.688 15:59:05 -- scripts/common.sh@391 -- # pt=
00:03:04.688 15:59:05 -- scripts/common.sh@392 -- # return 1
00:03:04.688 15:59:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:04.688 1+0 records in
00:03:04.688 1+0 records out
00:03:04.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00232076 s, 452 MB/s
00:03:04.688 15:59:05 -- spdk/autotest.sh@118 -- # sync
00:03:06.593 15:59:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:06.593 15:59:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:06.593 15:59:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:06.593 15:59:07 -- spdk/autotest.sh@124 -- # uname -s
00:03:06.593 15:59:07 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:06.593 15:59:07 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:06.593 15:59:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:06.593 15:59:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:06.593 15:59:07 -- common/autotest_common.sh@10 -- # set +x
00:03:06.593 ************************************
00:03:06.593 START TEST setup.sh
00:03:06.593 ************************************
00:03:06.593 15:59:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:06.852 * Looking for test storage...
00:03:06.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:06.852 15:59:07 -- setup/test-setup.sh@10 -- # uname -s
00:03:06.852 15:59:07 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:06.852 15:59:07 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:06.852 15:59:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:06.852 15:59:08 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:06.852 15:59:08 -- common/autotest_common.sh@10 -- # set +x
00:03:06.852 ************************************
00:03:06.852 START TEST acl
00:03:06.852 ************************************
00:03:06.852 15:59:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:06.852 * Looking for test storage...
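The block_in_use trace above boils down to a small guard: the namespace is treated as free only when neither spdk-gpt.py nor blkid can find a partition table, and only then is its first megabyte scrubbed. A minimal stand-alone sketch of that logic, assuming only util-linux blkid and coreutils dd (the function name wipe_if_unpartitioned is illustrative, not a helper from the SPDK tree):

    # Refuse to touch a disk that still carries a partition table;
    # otherwise zero the first MiB, as autotest.sh does in the trace above.
    wipe_if_unpartitioned() {
        local dev=$1 pt
        pt=$(blkid -s PTTYPE -o value "$dev")   # prints gpt/dos/..., empty if none
        if [[ -n $pt ]]; then
            echo "$dev has a $pt partition table, leaving it alone" >&2
            return 1
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1
    }

In this run blkid printed nothing (pt= in the trace), so the dd wipe went ahead.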
00:03:06.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:06.852 15:59:08 -- setup/acl.sh@10 -- # get_zoned_devs
00:03:06.852 15:59:08 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:06.852 15:59:08 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:06.852 15:59:08 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:06.852 15:59:08 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:06.852 15:59:08 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:06.852 15:59:08 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:06.852 15:59:08 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:06.852 15:59:08 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:06.852 15:59:08 -- setup/acl.sh@12 -- # devs=()
00:03:06.852 15:59:08 -- setup/acl.sh@12 -- # declare -a devs
00:03:06.852 15:59:08 -- setup/acl.sh@13 -- # drivers=()
00:03:06.852 15:59:08 -- setup/acl.sh@13 -- # declare -A drivers
00:03:06.852 15:59:08 -- setup/acl.sh@51 -- # setup reset
00:03:06.852 15:59:08 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:06.852 15:59:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:08.228 15:59:09 -- setup/acl.sh@52 -- # collect_setup_devs
00:03:08.228 15:59:09 -- setup/acl.sh@16 -- # local dev driver
00:03:08.228 15:59:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.228 15:59:09 -- setup/acl.sh@15 -- # setup output status
00:03:08.228 15:59:09 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.228 15:59:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:09.659 Hugepages
00:03:09.659 node hugesize free / total
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659
00:03:09.659 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
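The collect_setup_devs loop being traced here walks the "setup.sh status" table whose header (Type BDF Vendor Device NUMA Driver Device Block devices) appears just above: column two is the BDF, column six the bound driver, and only nvme-bound functions are kept. A minimal stand-alone sketch under those column-layout assumptions (nvme_bdfs is an illustrative array name):

    # Collect every PCI function that "setup.sh status" reports as nvme-bound.
    declare -a nvme_bdfs
    while read -r _ bdf _ _ _ driver _; do
        [[ $bdf == *:*:*.* ]] || continue       # skip headers and hugepage rows
        [[ $driver == nvme ]] && nvme_bdfs+=("$bdf")
    done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status)
    printf '%s\n' "${nvme_bdfs[@]}"

On this node that would leave exactly one entry, 0000:0b:00.0, since every other function in the table below is held by ioatdma.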
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:09.659 15:59:10 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.659 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.659 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.659 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.660 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.660 15:59:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:03:09.660 15:59:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:09.660 15:59:10 -- setup/acl.sh@20 -- # continue
00:03:09.660 15:59:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.660 15:59:10 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:09.660 15:59:10 -- setup/acl.sh@54 -- # run_test denied denied
00:03:09.660 15:59:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:09.660 15:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:09.660 15:59:10 -- common/autotest_common.sh@10 -- # set +x
00:03:09.660 ************************************
00:03:09.660 START TEST denied
00:03:09.660 ************************************
00:03:09.660 15:59:10 -- common/autotest_common.sh@1111 -- # denied
00:03:09.660 15:59:10 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0'
00:03:09.660 15:59:10 -- setup/acl.sh@38 -- # setup output config
00:03:09.660 15:59:10 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0'
00:03:09.660 15:59:10 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:09.660 15:59:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:11.033 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0
00:03:11.033 15:59:12 -- setup/acl.sh@40 -- # verify 0000:0b:00.0
00:03:11.033 15:59:12 -- setup/acl.sh@28 -- # local dev driver
00:03:11.033 15:59:12 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:11.033 15:59:12 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]]
00:03:11.033 15:59:12 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver
00:03:11.033 15:59:12 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:11.033 15:59:12 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:11.033 15:59:12 -- setup/acl.sh@41 -- # setup reset
00:03:11.033 15:59:12 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:11.033 15:59:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:13.591
00:03:13.591 real 0m3.405s
00:03:13.591 user 0m0.990s
00:03:13.591 sys 0m1.580s
00:03:13.591 15:59:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:13.591 15:59:14 -- common/autotest_common.sh@10 -- # set +x
00:03:13.591 ************************************
00:03:13.591 END TEST denied
00:03:13.591 ************************************
00:03:13.591 15:59:14 -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:13.591 15:59:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:13.591 15:59:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:13.591 15:59:14 -- common/autotest_common.sh@10 -- # set +x
00:03:13.591 ************************************
00:03:13.591 START TEST allowed
00:03:13.591 ************************************
00:03:13.591 15:59:14 -- common/autotest_common.sh@1111 -- # allowed
00:03:13.591 15:59:14 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0
00:03:13.591 15:59:14 -- setup/acl.sh@45 -- # setup output config
00:03:13.591 15:59:14 -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*'
00:03:13.591 15:59:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:13.591 15:59:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
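The verify step in the denied test above answers one question: which driver does sysfs say a PCI function is bound to? A minimal sketch of that check, assuming only readlink and the standard /sys/bus/pci layout (driver_of is an illustrative name, not acl.sh's helper):

    # Resolve the bound driver for a BDF straight from sysfs.
    driver_of() {
        local bdf=$1 link
        link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver") || return 1
        basename "$link"    # e.g. nvme, ioatdma, vfio-pci
    }
    driver_of 0000:0b:00.0

With PCI_BLOCKED=' 0000:0b:00.0' the denied test expects this to keep answering nvme; the allowed test whose config output follows expects setup.sh config to rebind the device (nvme -> vfio-pci).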
00:03:15.490 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:03:15.490 15:59:16 -- setup/acl.sh@47 -- # verify
00:03:15.490 15:59:16 -- setup/acl.sh@28 -- # local dev driver
00:03:15.490 15:59:16 -- setup/acl.sh@48 -- # setup reset
00:03:15.490 15:59:16 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:15.490 15:59:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:16.862
00:03:16.862 real 0m3.660s
00:03:16.862 user 0m0.978s
00:03:16.862 sys 0m1.667s
00:03:16.862 15:59:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:16.862 15:59:18 -- common/autotest_common.sh@10 -- # set +x
00:03:16.862 ************************************
00:03:16.862 END TEST allowed
00:03:16.862 ************************************
00:03:16.862
00:03:16.862 real 0m10.027s
00:03:16.862 user 0m3.146s
00:03:16.862 sys 0m5.091s
00:03:16.862 15:59:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:16.862 15:59:18 -- common/autotest_common.sh@10 -- # set +x
00:03:16.862 ************************************
00:03:16.862 END TEST acl
00:03:16.862 ************************************
00:03:16.862 15:59:18 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:16.862 15:59:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:16.862 15:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:16.862 15:59:18 -- common/autotest_common.sh@10 -- # set +x
00:03:17.121 ************************************
00:03:17.121 START TEST hugepages
00:03:17.121 ************************************
00:03:17.121 15:59:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:17.121 * Looking for test storage...
00:03:17.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.121 15:59:18 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:17.121 15:59:18 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:17.121 15:59:18 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:17.121 15:59:18 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:17.121 15:59:18 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:17.121 15:59:18 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:17.121 15:59:18 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:17.121 15:59:18 -- setup/common.sh@18 -- # local node= 00:03:17.121 15:59:18 -- setup/common.sh@19 -- # local var val 00:03:17.121 15:59:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.122 15:59:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.122 15:59:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.122 15:59:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.122 15:59:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.122 15:59:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 44936092 kB' 'MemAvailable: 45905620 kB' 'Buffers: 1308 kB' 'Cached: 13940544 kB' 'SwapCached: 0 kB' 'Active: 13995744 kB' 'Inactive: 545320 kB' 'Active(anon): 13325512 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602484 kB' 'Mapped: 179140 kB' 'Shmem: 12726300 kB' 'KReclaimable: 419288 kB' 'Slab: 799804 kB' 'SReclaimable: 419288 kB' 'SUnreclaim: 380516 kB' 'KernelStack: 13168 kB' 'PageTables: 9888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39142796 kB' 'Committed_AS: 14540068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197200 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.122 15:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.122 15:59:18 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:17.122 15:59:18 -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / compare / continue cycle for every remaining /proc/meminfo key (Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp) until the Hugepagesize line matches]
00:03:17.123 15:59:18 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:17.123 15:59:18 -- setup/common.sh@33 -- # echo 2048
00:03:17.123 15:59:18 -- setup/common.sh@33 -- # return 0
00:03:17.123 15:59:18 -- setup/hugepages.sh@16 -- # default_hugepages=2048
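The loop condensed above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time until the requested key (here Hugepagesize) matches; the backslash-escaped pattern in the [[ ]] checks is just xtrace quoting of a literal comparison. A minimal, self-contained sketch of that pattern, reconstructed from the xtrace output rather than copied from the SPDK source (the real helper reads via mapfile, as the preamble later in this log shows):

get_meminfo() {
    # Walk /proc/meminfo; every non-matching key is one "continue" in the trace.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"    # value only, e.g. 2048 for "Hugepagesize: 2048 kB"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo Hugepagesize   # prints 2048 on this machine, matching the echo above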
00:03:17.123 15:59:18 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:17.123 15:59:18 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:17.123 15:59:18 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:17.123 15:59:18 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:17.123 15:59:18 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:17.123 15:59:18 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:17.123 15:59:18 -- setup/hugepages.sh@207 -- # get_nodes
00:03:17.123 15:59:18 -- setup/hugepages.sh@27 -- # local node
00:03:17.123 15:59:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.123 15:59:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:17.123 15:59:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.123 15:59:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:17.123 15:59:18 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.123 15:59:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:17.123 15:59:18 -- setup/hugepages.sh@208 -- # clear_hp
00:03:17.123 15:59:18 -- setup/hugepages.sh@37 -- # local node hp
00:03:17.123 15:59:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:17.123 15:59:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:17.123 15:59:18 -- setup/hugepages.sh@41 -- # echo 0
00:03:17.123 15:59:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:17.123 15:59:18 -- setup/hugepages.sh@41 -- # echo 0
00:03:17.123 15:59:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:17.123 15:59:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:17.123 15:59:18 -- setup/hugepages.sh@41 -- # echo 0
00:03:17.123 15:59:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:17.123 15:59:18 -- setup/hugepages.sh@41 -- # echo 0
00:03:17.123 15:59:18 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:17.123 15:59:18 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:17.123 15:59:18 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:17.123 15:59:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:17.123 15:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:17.123 15:59:18 -- common/autotest_common.sh@10 -- # set +x
00:03:17.123 ************************************
00:03:17.123 START TEST default_setup
00:03:17.123 ************************************
00:03:17.123 15:59:18 -- common/autotest_common.sh@1111 -- # default_setup
00:03:17.123 15:59:18 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:17.123 15:59:18 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:17.123 15:59:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:17.123 15:59:18 -- setup/hugepages.sh@51 -- # shift
00:03:17.123 15:59:18 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:17.123 15:59:18 -- setup/hugepages.sh@52 -- # local node_ids
00:03:17.123 15:59:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:17.123 15:59:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:17.123 15:59:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:17.123 15:59:18 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:17.123 15:59:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:17.123 15:59:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:17.123 15:59:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:17.123 15:59:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:17.123 15:59:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:17.123 15:59:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
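The get_nodes / clear_hp / get_test_nr_hugepages steps traced above discover the NUMA nodes, zero out any leftover per-node hugepage pools, and size the test allocation. A hedged reconstruction of that sequence (the sysfs paths are the standard locations seen in the trace; the surrounding structure is inferred, not copied from the script):

shopt -s extglob                      # needed for the node+([0-9]) glob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=0       # two nodes on this box: no_nodes=2
done
for node in "${!nodes_sys[@]}"; do    # clear_hp: reset every pool, every page size
    for hp in /sys/devices/system/node/node"$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # requires root, as in the CI environment
    done
done
# default_setup requests 2097152 kB of 2048 kB pages, all placed on node 0:
echo $(( 2097152 / 2048 ))            # 1024, matching nr_hugepages=1024 above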
00:03:17.123 15:59:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:17.123 15:59:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:17.123 15:59:18 -- setup/hugepages.sh@73 -- # return 0
00:03:17.123 15:59:18 -- setup/hugepages.sh@137 -- # setup output
00:03:17.123 15:59:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.123 15:59:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:18.497 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:18.497 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:18.497 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:18.497 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:18.497 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:18.497 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:18.497 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:18.497 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:18.497 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:18.497 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:18.497 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:18.497 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:18.497 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:18.497 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:18.497 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:18.497 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:19.431 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:03:19.431 15:59:20 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:19.431 15:59:20 -- setup/hugepages.sh@89 -- # local node
00:03:19.431 15:59:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:19.431 15:59:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:19.431 15:59:20 -- setup/hugepages.sh@92 -- # local surp
00:03:19.431 15:59:20 -- setup/hugepages.sh@93 -- # local resv
00:03:19.431 15:59:20 -- setup/hugepages.sh@94 -- # local anon
00:03:19.431 15:59:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:19.431 15:59:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:19.431 15:59:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:19.431 15:59:20 -- setup/common.sh@18 -- # local node=
00:03:19.432 15:59:20 -- setup/common.sh@19 -- # local var val
00:03:19.432 15:59:20 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.432 15:59:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.432 15:59:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.432 15:59:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.432 15:59:20 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.432 15:59:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.432 15:59:20 -- setup/common.sh@31 -- # IFS=': '
00:03:19.432 15:59:20 -- setup/common.sh@31 -- # read -r var val _
00:03:19.432 15:59:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47054176 kB' 'MemAvailable: 48023736 kB' 'Buffers: 1308 kB' 'Cached: 13940632 kB' 'SwapCached: 0 kB' 'Active: 14014616 kB' 'Inactive: 545320 kB' 'Active(anon): 13344384 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621444 kB' 'Mapped: 179264 kB' 'Shmem: 12726388 kB' 'KReclaimable: 419320 kB' 'Slab: 799104 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379784 kB' 'KernelStack: 13248 kB' 'PageTables: 10308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14556860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197456 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: setup/common.sh@31-32 compares each snapshot key against AnonHugePages and continues until it matches]
00:03:19.433 15:59:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:19.433 15:59:20 -- setup/common.sh@33 -- # echo 0
00:03:19.433 15:59:20 -- setup/common.sh@33 -- # return 0
00:03:19.433 15:59:20 -- setup/hugepages.sh@97 -- # anon=0
00:03:19.433 15:59:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: same local/mapfile preamble (setup/common.sh@17-31) with get=HugePages_Surp, then:]
00:03:19.433 15:59:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47054456 kB' 'MemAvailable: 48024016 kB' 'Buffers: 1308 kB' 'Cached: 13940632 kB' 'SwapCached: 0 kB' 'Active: 14013752 kB' 'Inactive: 545320 kB' 'Active(anon): 13343520 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620472 kB' 'Mapped: 179256 kB' 'Shmem: 12726388 kB' 'KReclaimable: 419320 kB' 'Slab: 799096 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379776 kB' 'KernelStack: 12912 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14556872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197296 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
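The preamble shown before each snapshot (mem_f selection, mapfile, the "Node +([0-9])" strip) is what lets the same helper answer both global and per-node queries: per-node meminfo lines carry a "Node N " prefix that the expansion removes so both files parse identically. A sketch of that node-aware variant, with the flow inferred from the trace rather than taken from setup/common.sh itself:

get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local line var val _
    shopt -s extglob
    # With a node argument, read the per-node file instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}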
[xtrace condensed: setup/common.sh@31-32 compares each snapshot key against HugePages_Surp and continues until it matches]
00:03:19.434 15:59:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.434 15:59:20 -- setup/common.sh@33 -- # echo 0
00:03:19.434 15:59:20 -- setup/common.sh@33 -- # return 0
00:03:19.434 15:59:20 -- setup/hugepages.sh@99 -- # surp=0
00:03:19.434 15:59:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: same local/mapfile preamble (setup/common.sh@17-31) with get=HugePages_Rsvd, then:]
00:03:19.434 15:59:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47053984 kB' 'MemAvailable: 48023544 kB' 'Buffers: 1308 kB' 'Cached: 13940648 kB' 'SwapCached: 0 kB' 'Active: 14012936 kB' 'Inactive: 545320 kB' 'Active(anon): 13342704 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619640 kB' 'Mapped: 179296 kB' 'Shmem: 12726404 kB' 'KReclaimable: 419320 kB' 'Slab: 799092 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379772 kB' 'KernelStack: 13024 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14556888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197312 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
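verify_nr_hugepages issues one full snapshot per counter (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total), which is why the same meminfo dump repeats four times in this excerpt. For comparison only, a single-pass awk read collecting the same inputs; this is an illustration, not how setup/common.sh does it:

eval "$(awk -F'[: ]+' '
    /^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ { print $1 "=" $2 }
' /proc/meminfo)"
echo "total=$HugePages_Total free=$HugePages_Free rsvd=$HugePages_Rsvd surp=$HugePages_Surp anon=$AnonHugePages"
# On this run: total=1024 free=1024 rsvd=0 surp=0 anon=0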
[xtrace condensed: setup/common.sh@31-32 compares each snapshot key against HugePages_Rsvd and continues until it matches]
00:03:19.435 15:59:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:19.435 15:59:20 -- setup/common.sh@33 -- # echo 0
00:03:19.435 15:59:20 -- setup/common.sh@33 -- # return 0
00:03:19.435 15:59:20 -- setup/hugepages.sh@100 -- # resv=0
00:03:19.435 15:59:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:19.435 nr_hugepages=1024
00:03:19.435 15:59:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:19.435 resv_hugepages=0
00:03:19.435 15:59:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:19.435 surplus_hugepages=0
00:03:19.435 15:59:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:19.435 anon_hugepages=0
00:03:19.435 15:59:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:19.435 15:59:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
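The two arithmetic guards just traced are the core of the verification: the pool must hold exactly the requested number of pages once surplus and reserved pages are accounted for. A worked instance of the check, with values copied from this run:

nr_hugepages=1024 surp=0 resv=0
total=1024                                  # HugePages_Total from the snapshot
(( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> passes
(( total == nr_hugepages ))                 # second guard; also passes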
00:03:19.435 15:59:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:19.435 15:59:20 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:19.435 15:59:20 -- setup/common.sh@18 -- # local node=
00:03:19.435 15:59:20 -- setup/common.sh@19 -- # local var val
00:03:19.435 15:59:20 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.435 15:59:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.435 15:59:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.435 15:59:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.435 15:59:20 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.435 15:59:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.435 15:59:20 -- setup/common.sh@31 -- # IFS=': '
00:03:19.435 15:59:20 -- setup/common.sh@31 -- # read -r var val _
00:03:19.435 15:59:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47053732 kB' 'MemAvailable: 48023292 kB' 'Buffers: 1308 kB' 'Cached: 13940648 kB' 'SwapCached: 0 kB' 'Active: 14012844 kB' 'Inactive: 545320 kB' 'Active(anon): 13342612 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619568 kB' 'Mapped: 179220 kB' 'Shmem: 12726404 kB' 'KReclaimable: 419320 kB' 'Slab: 799096 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379776 kB' 'KernelStack: 13040 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14556900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197312 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:19.435 15:59:20 -- setup/common.sh@31-32 -- # (scan: read/continue over non-matching fields MemTotal ... Unaccepted)
00:03:19.437 15:59:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.437 15:59:20 -- setup/common.sh@33 -- # echo 1024
00:03:19.437 15:59:20 -- setup/common.sh@33 -- # return 0
00:03:19.437 15:59:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:19.437 15:59:20 -- setup/hugepages.sh@112 -- # get_nodes
00:03:19.437 15:59:20 -- setup/hugepages.sh@27 -- # local node
00:03:19.437 15:59:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.437 15:59:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:19.437 15:59:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.437 15:59:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:19.695 15:59:20 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.695 15:59:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:19.695 15:59:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.695 15:59:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
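When get_meminfo is given a node id, as in the HugePages_Surp lookup that follows, the @23/@24 checks swap the source file for that node's own meminfo, whose lines carry a "Node 0 " prefix, and the @29 expansion strips that prefix with an extglob pattern before the same field scan runs. A sketch of the per-node variant, under the same caveats as the earlier meminfo_get sketch (node_meminfo_get is a hypothetical name):

  shopt -s extglob
  node_meminfo_get() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue
      echo "$val"
      return 0
    done
  }
  # node_meminfo_get HugePages_Surp 0   -> 0, as the node0 lookup below returns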
00:03:19.695 15:59:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:19.695 15:59:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.695 15:59:20 -- setup/common.sh@18 -- # local node=0
00:03:19.695 15:59:20 -- setup/common.sh@19 -- # local var val
00:03:19.695 15:59:20 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.695 15:59:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.695 15:59:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:19.695 15:59:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:19.695 15:59:20 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.695 15:59:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.695 15:59:20 -- setup/common.sh@31 -- # IFS=': '
00:03:19.695 15:59:20 -- setup/common.sh@31 -- # read -r var val _
00:03:19.696 15:59:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876944 kB' 'MemFree: 22684400 kB' 'MemUsed: 10192544 kB' 'SwapCached: 0 kB' 'Active: 6828672 kB' 'Inactive: 347252 kB' 'Active(anon): 6405332 kB' 'Inactive(anon): 0 kB' 'Active(file): 423340 kB' 'Inactive(file): 347252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6992164 kB' 'Mapped: 75884 kB' 'AnonPages: 187036 kB' 'Shmem: 6221572 kB' 'KernelStack: 7704 kB' 'PageTables: 5000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 174924 kB' 'Slab: 369620 kB' 'SReclaimable: 174924 kB' 'SUnreclaim: 194696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:19.696 15:59:20 -- setup/common.sh@31-32 -- # (scan: read/continue over non-matching fields MemTotal ... HugePages_Free)
00:03:19.696 15:59:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.696 15:59:20 -- setup/common.sh@33 -- # echo 0
00:03:19.696 15:59:20 -- setup/common.sh@33 -- # return 0
00:03:19.696 15:59:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.696 15:59:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.696 15:59:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.696 15:59:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.696 15:59:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:19.696 node0=1024 expecting 1024
00:03:19.696 15:59:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:19.696 real 0m2.408s
00:03:19.696 user 0m0.653s
00:03:19.696 sys 0m0.798s
00:03:19.696 15:59:20 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:19.696 15:59:20 -- common/autotest_common.sh@10 -- # set +x
00:03:19.696 ************************************
00:03:19.696 END TEST default_setup
00:03:19.696 ************************************
00:03:19.697 15:59:20 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:19.697 15:59:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:19.697 15:59:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:19.697 15:59:20 -- common/autotest_common.sh@10 -- # set +x
00:03:19.697 ************************************
00:03:19.697 START TEST per_node_1G_alloc
00:03:19.697 ************************************
00:03:19.697 15:59:20 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:03:19.697 15:59:20 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:19.697 15:59:20 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:19.697 15:59:20 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:19.697 15:59:20 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:19.697 15:59:20 -- setup/hugepages.sh@51 -- # shift
00:03:19.697 15:59:20 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:19.697 15:59:20 -- setup/hugepages.sh@52 -- # local node_ids
00:03:19.697 15:59:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.697 15:59:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:19.697 15:59:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:19.697 15:59:20 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:19.697 15:59:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.697 15:59:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:19.697 15:59:20 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.697 15:59:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.697 15:59:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.697 15:59:20 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:19.697 15:59:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:19.697 15:59:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:19.697 15:59:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:19.697 15:59:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:19.697 15:59:20 -- setup/hugepages.sh@73 -- # return 0
00:03:19.697 15:59:20 -- setup/hugepages.sh@146 -- # NRHUGE=512
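per_node_1G_alloc asks for 1 GiB of hugepages on each of two NUMA nodes: the get_test_nr_hugepages trace above turns size=1048576 kB into nr_hugepages=512 (consistent with 1048576 kB / 2048 kB Hugepagesize from the meminfo dumps), and nodes_test[0] and nodes_test[1] each get 512 pages. NRHUGE and HUGENODE then parameterize scripts/setup.sh below, which on Linux typically applies such requests through the per-node sysfs knobs (/sys/devices/system/node/nodeN/hugepages/hugepages-2048kB/nr_hugepages). A sketch of the arithmetic and the split, with variable names taken from the trace:

  size_kb=1048576                              # requested size per node: 1 GiB in kB
  hugepage_kb=2048                             # Hugepagesize from the dumps above
  nr_hugepages=$(( size_kb / hugepage_kb ))    # = 512 pages
  declare -a nodes_test
  for node in 0 1; do                          # HUGENODE=0,1
    nodes_test[node]=$nr_hugepages             # 512 pages requested on each node
  done
  echo "NRHUGE=$nr_hugepages HUGENODE=0,1"     # handed to setup.sh below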
15:59:20 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:19.697 15:59:20 -- setup/hugepages.sh@146 -- # setup output 00:03:19.697 15:59:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.697 15:59:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.073 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:21.073 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:21.073 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:21.073 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:21.073 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:21.073 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:21.073 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:21.073 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:21.073 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:21.073 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.073 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:21.073 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:21.073 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:21.073 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:21.073 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:21.073 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:21.073 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:21.073 15:59:22 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:21.073 15:59:22 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:21.073 15:59:22 -- setup/hugepages.sh@89 -- # local node 00:03:21.073 15:59:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.073 15:59:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.073 15:59:22 -- setup/hugepages.sh@92 -- # local surp 00:03:21.073 15:59:22 -- setup/hugepages.sh@93 -- # local resv 00:03:21.073 15:59:22 -- setup/hugepages.sh@94 -- # local anon 00:03:21.073 15:59:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.073 15:59:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.073 15:59:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.073 15:59:22 -- setup/common.sh@18 -- # local node= 00:03:21.073 15:59:22 -- setup/common.sh@19 -- # local var val 00:03:21.073 15:59:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.073 15:59:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.073 15:59:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.073 15:59:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.073 15:59:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.073 15:59:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.073 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.073 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.073 15:59:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47038776 kB' 'MemAvailable: 48008344 kB' 'Buffers: 1308 kB' 'Cached: 13940716 kB' 'SwapCached: 0 kB' 'Active: 14009868 kB' 'Inactive: 545320 kB' 'Active(anon): 13339636 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616272 kB' 'Mapped: 179264 kB' 
'Shmem: 12726472 kB' 'KReclaimable: 419328 kB' 'Slab: 799264 kB' 'SReclaimable: 419328 kB' 'SUnreclaim: 379936 kB' 'KernelStack: 13008 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14552544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197424 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:21.073 15:59:22 -- setup/common.sh@31-32 -- # (scan: read/continue over non-matching fields MemTotal ... HardwareCorrupted)
00:03:21.073 15:59:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.073 15:59:22 -- setup/common.sh@33 -- # echo 0
00:03:21.073 15:59:22 -- setup/common.sh@33 -- # return 0
00:03:21.073 15:59:22 -- setup/hugepages.sh@97 -- # anon=0
00:03:21.073 15:59:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.073 15:59:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.073 15:59:22 -- setup/common.sh@18 -- # local node=
00:03:21.073 15:59:22 -- setup/common.sh@19 -- # local var val
00:03:21.073 15:59:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.073 15:59:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.073 15:59:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.073 15:59:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.073 15:59:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.073 15:59:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.073 15:59:22 -- setup/common.sh@31 -- # IFS=': '
00:03:21.073 15:59:22 -- setup/common.sh@31 -- # read -r var val _
00:03:21.073 15:59:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47039884 kB' 'MemAvailable: 48009452 kB' 'Buffers: 1308 kB' 'Cached: 13940720 kB' 'SwapCached: 0 kB' 'Active: 14009856 kB' 'Inactive: 545320 kB' 'Active(anon): 13339624 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616348 kB' 'Mapped: 179320 kB' 'Shmem: 12726476 kB' 'KReclaimable: 419328 kB' 'Slab: 799300 kB' 'SReclaimable: 419328 kB' 'SUnreclaim: 379972 kB' 'KernelStack: 13040 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14552556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197376 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
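Between the AnonHugePages lookup that just returned anon=0 and the HugePages_Surp and HugePages_Rsvd lookups that follow, verify_nr_hugepages is assembling the accounting check traced at hugepages.sh@107/@110 in the default_setup run above: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages. With this runner's values, 1024 == 1024 + 0 + 0. A sketch of that check, reusing the hypothetical meminfo_get helper from the earlier sketch:

  nr_hugepages=1024                           # hugepages.sh@147 above
  anon=$(meminfo_get AnonHugePages)           # 0 kB; checked because the THP policy above is not [never]
  surp=$(meminfo_get HugePages_Surp)          # 0
  resv=$(meminfo_get HugePages_Rsvd)          # 0
  total=$(meminfo_get HugePages_Total)        # 1024
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"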
00:03:21.073 15:59:22 -- setup/common.sh@31-32 -- # (scan: read/continue over non-matching fields MemTotal ... HugePages_Total)
00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': '
00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _
00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Free == 
-r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.074 15:59:22 -- setup/common.sh@33 -- # echo 0 00:03:21.074 15:59:22 -- setup/common.sh@33 -- # return 0 00:03:21.074 15:59:22 -- setup/hugepages.sh@99 -- # surp=0 00:03:21.074 15:59:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.074 15:59:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.074 15:59:22 -- setup/common.sh@18 -- # local node= 00:03:21.074 15:59:22 -- setup/common.sh@19 -- # local var val 00:03:21.074 15:59:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.074 15:59:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.074 15:59:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.074 15:59:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.074 15:59:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.074 15:59:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47040092 kB' 'MemAvailable: 48009660 kB' 'Buffers: 1308 kB' 'Cached: 13940724 kB' 'SwapCached: 0 kB' 'Active: 14009892 kB' 'Inactive: 545320 kB' 'Active(anon): 13339660 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616324 kB' 'Mapped: 179244 kB' 'Shmem: 12726480 kB' 'KReclaimable: 419328 kB' 'Slab: 799316 kB' 'SReclaimable: 419328 kB' 'SUnreclaim: 379988 kB' 'KernelStack: 13040 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14553660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197376 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # 
continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.074 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.074 15:59:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 
15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.075 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.075 15:59:22 -- setup/common.sh@33 -- # echo 0 00:03:21.075 15:59:22 -- setup/common.sh@33 -- # return 0 00:03:21.075 15:59:22 -- setup/hugepages.sh@100 -- # resv=0 00:03:21.075 15:59:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.075 nr_hugepages=1024 00:03:21.075 15:59:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.075 resv_hugepages=0 00:03:21.075 15:59:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.075 surplus_hugepages=0 00:03:21.075 15:59:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.075 anon_hugepages=0 00:03:21.075 15:59:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.075 15:59:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.075 15:59:22 -- setup/hugepages.sh@110 -- # get_meminfo 
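The scan traced above is setup/common.sh's get_meminfo helper at work: mapfile slurps the meminfo file, an IFS=': ' read splits each line into key and value, and each key is compared against the requested field until it matches. A minimal standalone sketch of the same technique (an illustration, not the SPDK script itself; the final arithmetic check mirrors the trace's consistency test, which with surp and resv both 0 reduces to total == requested):

    #!/usr/bin/env bash
    # Return the value of one /proc/meminfo field, get_meminfo-style:
    # split each line on ': ' and echo the value of the first matching key.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    target=1024   # the page count this test requested
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    # Same shape as the trace's (( 1024 == nr_hugepages + surp + resv )) check.
    (( total == target + surp + resv )) && echo "hugepage accounting consistent"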
00:03:21.075 15:59:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.075 15:59:22 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.075 15:59:22 -- setup/common.sh@18 -- # local node=
00:03:21.075 15:59:22 -- setup/common.sh@19 -- # local var val
00:03:21.075 15:59:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.075 15:59:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.075 15:59:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.075 15:59:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.075 15:59:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.075 15:59:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.075 15:59:22 -- setup/common.sh@31 -- # IFS=': '
00:03:21.075 15:59:22 -- setup/common.sh@31 -- # read -r var val _
00:03:21.075 15:59:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47040092 kB' 'MemAvailable: 48009660 kB' 'Buffers: 1308 kB' 'Cached: 13940748 kB' 'SwapCached: 0 kB' 'Active: 14013252 kB' 'Inactive: 545320 kB' 'Active(anon): 13343020 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619688 kB' 'Mapped: 179680 kB' 'Shmem: 12726504 kB' 'KReclaimable: 419328 kB' 'Slab: 799316 kB' 'SReclaimable: 419328 kB' 'SUnreclaim: 379988 kB' 'KernelStack: 13040 kB' 'PageTables: 9244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14556712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197344 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:21.075 [... xtrace of the per-key scan elided, this time with get=HugePages_Total ...]
00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.076 15:59:22 -- setup/common.sh@33 -- # echo 1024
00:03:21.076 15:59:22 -- setup/common.sh@33 -- # return 0
00:03:21.076 15:59:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.076 15:59:22 -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.076 15:59:22 -- setup/hugepages.sh@27 -- # local node
00:03:21.076 15:59:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.076 15:59:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:21.076 15:59:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.076 15:59:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:21.076 15:59:22 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.076 15:59:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:21.076 15:59:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.076 15:59:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
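The two get_meminfo calls that follow pass a node index, so mem_f switches from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo, where every line carries a "Node N " prefix; that prefix is what the extglob substitution mem=("${mem[@]#Node +([0-9]) }") strips before the usual key/value scan. A sketch of the same per-node lookup under the standard sysfs layout (the helper name here is ours, not SPDK's):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # Return one field from a node's meminfo, stripping the "Node N " prefix
    # so the line parses exactly like a /proc/meminfo line.
    get_node_meminfo() {
        local node=$1 get=$2 line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    get_node_meminfo 0 HugePages_Total   # prints 512 on this test machine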
'MemTotal: 32876944 kB' 'MemFree: 23724424 kB' 'MemUsed: 9152520 kB' 'SwapCached: 0 kB' 'Active: 6827644 kB' 'Inactive: 347252 kB' 'Active(anon): 6404304 kB' 'Inactive(anon): 0 kB' 'Active(file): 423340 kB' 'Inactive(file): 347252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6992176 kB' 'Mapped: 76036 kB' 'AnonPages: 185836 kB' 'Shmem: 6221584 kB' 'KernelStack: 7720 kB' 'PageTables: 5132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 174924 kB' 'Slab: 369652 kB' 'SReclaimable: 174924 kB' 'SUnreclaim: 194728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- 
setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 
00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@33 -- # echo 0 00:03:21.076 15:59:22 -- setup/common.sh@33 -- # return 0 00:03:21.076 15:59:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.076 15:59:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.076 15:59:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.076 15:59:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.076 15:59:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.076 15:59:22 -- setup/common.sh@18 -- # local node=1 00:03:21.076 15:59:22 -- setup/common.sh@19 -- # local var val 00:03:21.076 15:59:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.076 15:59:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.076 15:59:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.076 15:59:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.076 15:59:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.076 15:59:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825744 kB' 'MemFree: 23313324 kB' 'MemUsed: 9512420 kB' 'SwapCached: 0 kB' 'Active: 7188104 kB' 'Inactive: 198068 kB' 'Active(anon): 6941212 kB' 'Inactive(anon): 0 kB' 'Active(file): 246892 kB' 'Inactive(file): 198068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6949908 kB' 'Mapped: 104124 kB' 'AnonPages: 436460 kB' 'Shmem: 6504948 kB' 'KernelStack: 5336 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 244404 kB' 'Slab: 429664 kB' 'SReclaimable: 244404 kB' 'SUnreclaim: 185260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.076 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.076 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- 
setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # continue 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.077 15:59:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.077 15:59:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.077 15:59:22 -- setup/common.sh@33 -- # echo 0 00:03:21.077 15:59:22 -- setup/common.sh@33 -- # return 0 00:03:21.077 15:59:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.077 15:59:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.077 15:59:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.077 15:59:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.077 15:59:22 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:21.077 node0=512 expecting 512 00:03:21.077 15:59:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.077 15:59:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.077 15:59:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.077 15:59:22 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:21.077 node1=512 expecting 512 00:03:21.077 15:59:22 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:21.077 00:03:21.077 real 0m1.470s 00:03:21.077 user 0m0.540s 00:03:21.077 sys 0m0.890s 00:03:21.077 15:59:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.077 15:59:22 -- common/autotest_common.sh@10 -- # set +x 00:03:21.077 ************************************ 00:03:21.077 END TEST per_node_1G_alloc 00:03:21.077 ************************************ 00:03:21.077 15:59:22 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:21.077 
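[Annotation] The block above is setup/common.sh's get_meminfo walking a /proc/meminfo snapshot key by key — every non-matching field trips the `continue` branch, which is why the trace repeats — until it reaches the requested HugePages_Surp field, echoes its value (0) and returns. With surplus 0 on both nodes, the per-node totals stay at the expected 512 pages each and per_node_1G_alloc passes. A minimal sketch of that scan, assuming the same `IFS=': '` / `read -r var val _` convention visible in the trace (an illustration, not the exact setup/common.sh source, which snapshots the file into an array with mapfile first):

  # Return the value of one meminfo field, system-wide or for one NUMA node.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node queries read the node-local meminfo when a node is given.
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      # Node-local files prefix each line with "Node <n> "; strip it, as the
      # real script does with its "${mem[@]#Node +([0-9]) }" expansion.
      while IFS=': ' read -r var val _; do
          # Non-matching keys are skipped -- the long 'continue' runs above.
          [[ $var == "$get" ]] && { echo "${val%% *}"; return 0; }
      done < <(sed 's/^Node [0-9]\+ //' "$mem_f")
      return 1
  }

For example, `get_meminfo_sketch HugePages_Surp 0` would print node 0's surplus count; in this run it is 0, so node0=512 and node1=512 hold.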
15:59:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.077 15:59:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.077 15:59:22 -- common/autotest_common.sh@10 -- # set +x 00:03:21.336 ************************************ 00:03:21.336 START TEST even_2G_alloc 00:03:21.336 ************************************ 00:03:21.336 15:59:22 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:21.336 15:59:22 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:21.336 15:59:22 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.336 15:59:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.336 15:59:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.336 15:59:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.336 15:59:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.336 15:59:22 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.336 15:59:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.336 15:59:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.336 15:59:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.336 15:59:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.336 15:59:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.336 15:59:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.336 15:59:22 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.336 15:59:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.336 15:59:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.336 15:59:22 -- setup/hugepages.sh@83 -- # : 512 00:03:21.336 15:59:22 -- setup/hugepages.sh@84 -- # : 1 00:03:21.336 15:59:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.336 15:59:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.336 15:59:22 -- setup/hugepages.sh@83 -- # : 0 00:03:21.336 15:59:22 -- setup/hugepages.sh@84 -- # : 0 00:03:21.336 15:59:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.336 15:59:22 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:21.336 15:59:22 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:21.336 15:59:22 -- setup/hugepages.sh@153 -- # setup output 00:03:21.336 15:59:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.336 15:59:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.268 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.268 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.268 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:22.268 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.268 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.268 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.268 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:22.268 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:22.268 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.268 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.268 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.268 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:22.268 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.268 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.268 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.268 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:03:22.268 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:22.531 15:59:23 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:22.531 15:59:23 -- setup/hugepages.sh@89 -- # local node 00:03:22.531 15:59:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.531 15:59:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.531 15:59:23 -- setup/hugepages.sh@92 -- # local surp 00:03:22.531 15:59:23 -- setup/hugepages.sh@93 -- # local resv 00:03:22.531 15:59:23 -- setup/hugepages.sh@94 -- # local anon 00:03:22.531 15:59:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.531 15:59:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.531 15:59:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.531 15:59:23 -- setup/common.sh@18 -- # local node= 00:03:22.531 15:59:23 -- setup/common.sh@19 -- # local var val 00:03:22.531 15:59:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.531 15:59:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.531 15:59:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.531 15:59:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.531 15:59:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.531 15:59:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.531 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.531 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47033680 kB' 'MemAvailable: 48003240 kB' 'Buffers: 1308 kB' 'Cached: 13940824 kB' 'SwapCached: 0 kB' 'Active: 14010352 kB' 'Inactive: 545320 kB' 'Active(anon): 13340120 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616912 kB' 'Mapped: 179268 kB' 'Shmem: 12726580 kB' 'KReclaimable: 419320 kB' 'Slab: 799376 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 380056 kB' 'KernelStack: 12992 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14552608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197312 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 
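[Annotation] Before the scans above, even_2G_alloc sized its pool without touching the system yet: a 2097152 kB request at the default 2048 kB hugepage size gives nr_hugepages=1024, and because HUGE_EVEN_ALLOC=yes the budget is split evenly over the two NUMA nodes — the two nodes_test[...]=512 assignments in the trace. The PCI list printed by setup.sh just confirms every NVMe and DMA-engine function is already bound to vfio-pci, so no rebinding is needed. The sizing arithmetic as a sketch (variable names are illustrative, not SPDK's):

  size_kb=2097152          # requested pool size in kB (2 GiB)
  hugepagesz_kb=2048       # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepagesz_kb ))      # -> 1024
  no_nodes=2
  echo "per node: $(( nr_hugepages / no_nodes ))"  # -> 512

verify_nr_hugepages then re-reads /proc/meminfo to confirm the kernel actually delivered that pool, starting with the AnonHugePages sample whose scan continues below.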
00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 
15:59:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.532 15:59:23 -- setup/common.sh@33 -- # echo 0 00:03:22.532 15:59:23 -- setup/common.sh@33 -- # 
return 0 00:03:22.532 15:59:23 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.532 15:59:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.532 15:59:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.532 15:59:23 -- setup/common.sh@18 -- # local node= 00:03:22.532 15:59:23 -- setup/common.sh@19 -- # local var val 00:03:22.532 15:59:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.532 15:59:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.532 15:59:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.532 15:59:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.532 15:59:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.532 15:59:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47033680 kB' 'MemAvailable: 48003240 kB' 'Buffers: 1308 kB' 'Cached: 13940824 kB' 'SwapCached: 0 kB' 'Active: 14010880 kB' 'Inactive: 545320 kB' 'Active(anon): 13340648 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617416 kB' 'Mapped: 179284 kB' 'Shmem: 12726580 kB' 'KReclaimable: 419320 kB' 'Slab: 799376 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 380056 kB' 'KernelStack: 13024 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14552620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197296 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 
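[Annotation] Two checks just completed above: the THP gate `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` established that transparent hugepages are enabled (madvise mode), so AnonHugePages had to be sampled, and the sample came back 0 — no THP pages are inflating the accounting — giving anon=0; the trace has already moved on to the HugePages_Surp snapshot. The pair of checks as a sketch (reusing the hypothetical get_meminfo_sketch helper from the earlier annotation):

  # THP mode gate + anon-hugepage sample, as seen in the trace.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)  # 0 in this run
  else
      anon=0  # THP disabled: nothing to subtract from the totals
  fi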
-- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.532 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.532 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 
15:59:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 
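[Annotation] The scan in progress here is after HugePages_Surp — surplus pages the kernel allocated beyond nr_hugepages under overcommit — and it resolves to surp=0 just below, after which HugePages_Rsvd (pages committed to mappings but not yet faulted in) is read the same way and also comes back 0. Both must be zero for the accounting identity checked later to hold. When the field-by-field scan is not needed, the same counters can be read directly; a sketch:

  # Direct reads equivalent to the scans in this trace.
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  echo "surp=$surp resv=$resv"  # both 0 in this run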
15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.533 15:59:23 -- setup/common.sh@33 -- # echo 0 00:03:22.533 15:59:23 -- setup/common.sh@33 -- # return 0 00:03:22.533 15:59:23 -- setup/hugepages.sh@99 -- # surp=0 00:03:22.533 15:59:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.533 15:59:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.533 15:59:23 -- setup/common.sh@18 -- # local node= 00:03:22.533 15:59:23 -- setup/common.sh@19 -- # local var val 00:03:22.533 15:59:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.533 15:59:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.533 15:59:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.533 15:59:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.533 15:59:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.533 15:59:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 65702688 kB' 'MemFree: 47033428 kB' 'MemAvailable: 48002988 kB' 'Buffers: 1308 kB' 'Cached: 13940836 kB' 'SwapCached: 0 kB' 'Active: 14009968 kB' 'Inactive: 545320 kB' 'Active(anon): 13339736 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616476 kB' 'Mapped: 179276 kB' 'Shmem: 12726592 kB' 'KReclaimable: 419320 kB' 'Slab: 799364 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 380044 kB' 'KernelStack: 13040 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14552632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197296 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.533 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.533 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # 
continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.534 15:59:23 -- setup/common.sh@33 -- # echo 0 00:03:22.534 15:59:23 -- setup/common.sh@33 -- # return 0 00:03:22.534 15:59:23 -- setup/hugepages.sh@100 -- # resv=0 00:03:22.534 15:59:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.534 nr_hugepages=1024 00:03:22.534 15:59:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.534 resv_hugepages=0 00:03:22.534 15:59:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.534 surplus_hugepages=0 00:03:22.534 15:59:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.534 anon_hugepages=0 00:03:22.534 15:59:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.534 15:59:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.534 15:59:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.534 15:59:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.534 15:59:23 -- setup/common.sh@18 -- # local node= 00:03:22.534 15:59:23 -- setup/common.sh@19 -- # local var val 00:03:22.534 15:59:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.534 15:59:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.534 15:59:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.534 15:59:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.534 15:59:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.534 15:59:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47033428 kB' 'MemAvailable: 48002988 kB' 'Buffers: 1308 kB' 'Cached: 13940852 kB' 'SwapCached: 0 kB' 'Active: 14010340 kB' 'Inactive: 545320 kB' 'Active(anon): 13340108 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616896 kB' 'Mapped: 179276 kB' 'Shmem: 12726608 kB' 'KReclaimable: 419320 kB' 'Slab: 799356 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 380036 kB' 'KernelStack: 13056 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14552648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197312 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 
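[Annotation] With anon=0, surp=0 and resv=0 collected, hugepages.sh@107 asserted the accounting identity — the literal 1024 on its left-hand side is the already-expanded total — and @110 is now re-reading HugePages_Total itself (the scan continuing below), which per the snapshot above will report 1024. The assertion as a sketch, reusing the names from the trace:

  # Pool accounting must balance exactly (sketch of hugepages.sh@107-110).
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
      echo "nr_hugepages=$nr_hugepages verified"
  else
      echo "hugepage accounting mismatch: total=$total" >&2
  fi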
-- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.534 15:59:23 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.534 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.534 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 
15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.535 15:59:23 -- setup/common.sh@33 -- # echo 1024 00:03:22.535 15:59:23 -- setup/common.sh@33 -- # return 0 00:03:22.535 15:59:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.535 15:59:23 -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.535 15:59:23 -- setup/hugepages.sh@27 -- # local node 00:03:22.535 15:59:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.535 15:59:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.535 15:59:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.535 15:59:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.535 15:59:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.535 15:59:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.535 15:59:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.535 15:59:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.535 15:59:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.535 15:59:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.535 15:59:23 -- setup/common.sh@18 -- # local node=0 00:03:22.535 15:59:23 -- setup/common.sh@19 -- # local var val 00:03:22.535 15:59:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.535 15:59:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.535 15:59:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.535 15:59:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.535 15:59:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.535 15:59:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876944 kB' 'MemFree: 23700080 kB' 'MemUsed: 9176864 kB' 'SwapCached: 0 kB' 'Active: 6829812 kB' 'Inactive: 347252 kB' 'Active(anon): 6406472 kB' 'Inactive(anon): 0 kB' 'Active(file): 423340 kB' 'Inactive(file): 347252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6992236 kB' 'Mapped: 75888 kB' 'AnonPages: 188116 kB' 'Shmem: 6221644 kB' 'KernelStack: 7608 kB' 'PageTables: 4984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 174924 kB' 'Slab: 369596 kB' 'SReclaimable: 174924 kB' 'SUnreclaim: 194672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 
00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # continue 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.535 15:59:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.535 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.535 15:59:23 -- setup/common.sh@33 -- # echo 0 00:03:22.536 15:59:23 -- setup/common.sh@33 -- # return 0 00:03:22.536 15:59:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.536 15:59:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.536 15:59:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.536 15:59:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.536 15:59:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.536 15:59:23 -- setup/common.sh@18 -- # local node=1 00:03:22.536 15:59:23 -- setup/common.sh@19 -- # local var val 00:03:22.536 15:59:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.536 15:59:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.536 15:59:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.536 15:59:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.536 15:59:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.536 15:59:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.536 
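The get_meminfo calls traced above do all of the measuring in this test: the helper reads /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node argument is given, and walks the file with IFS=': ' read -r var val _ until the requested field matches, echoing its value. A minimal stand-alone sketch of the same lookup, assuming only a Linux host with per-node sysfs meminfo files (the function name and the sed/awk formulation are illustrative, not the harness's actual code):

    #!/usr/bin/env bash
    # Illustrative equivalent of the get_meminfo lookups traced above.
    # Usage: get_meminfo_sketch FIELD [NODE]
    get_meminfo_sketch() {
        local field=$1 node=$2
        local mem_f=/proc/meminfo
        # A node argument switches to that node's meminfo, as in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node N "; strip that first,
        # then print the matching field's value ("+0" drops the kB unit).
        sed 's/^Node [0-9]* //' "$mem_f" |
            awk -v f="$field" -F': *' '$1 == f { print $2 + 0 }'
    }

    get_meminfo_sketch HugePages_Surp 0   # prints 0 here, matching the echo above

The trace resumes below with the same lookup against node1's snapshot.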
00:03:22.536 15:59:23 -- setup/common.sh@31 -- # IFS=': '
00:03:22.536 15:59:23 -- setup/common.sh@31 -- # read -r var val _
00:03:22.536 15:59:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825744 kB' 'MemFree: 23333096 kB' 'MemUsed: 9492648 kB' 'SwapCached: 0 kB' 'Active: 7180536 kB' 'Inactive: 198068 kB' 'Active(anon): 6933644 kB' 'Inactive(anon): 0 kB' 'Active(file): 246892 kB' 'Inactive(file): 198068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6949940 kB' 'Mapped: 103388 kB' 'AnonPages: 428784 kB' 'Shmem: 6504980 kB' 'KernelStack: 5448 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 244396 kB' 'Slab: 429760 kB' 'SReclaimable: 244396 kB' 'SUnreclaim: 185364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:22.536 15:59:23 -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: node1 snapshot scanned field by field for HugePages_Surp]
00:03:22.536 15:59:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.536 15:59:23 -- setup/common.sh@33 -- # echo 0
00:03:22.536 15:59:23 -- setup/common.sh@33 -- # return 0
00:03:22.536 15:59:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.536 15:59:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.536 15:59:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.536 15:59:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.536 15:59:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:22.536 node0=512 expecting 512
00:03:22.536 15:59:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.536 15:59:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.536 15:59:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.536 15:59:23 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:22.536 node1=512 expecting 512
00:03:22.536 15:59:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:22.536
00:03:22.536 real 0m1.338s
00:03:22.536 user 0m0.539s
00:03:22.536 sys 0m0.760s
00:03:22.536 15:59:23 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:22.536 15:59:23 -- common/autotest_common.sh@10 -- # set +x
00:03:22.536 ************************************
00:03:22.536 END TEST even_2G_alloc
00:03:22.536 ************************************
00:03:22.536 15:59:23 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:22.536 15:59:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:22.536 15:59:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:22.536 15:59:23 -- common/autotest_common.sh@10 -- # set +x
00:03:22.795 ************************************
00:03:22.795 START TEST odd_alloc
00:03:22.795 ************************************
00:03:22.795 15:59:23 -- common/autotest_common.sh@1111 -- # odd_alloc
00:03:22.795 15:59:23 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:22.795 15:59:23 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:22.795 15:59:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:22.795 15:59:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.795 15:59:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:22.795 15:59:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:22.795 15:59:23 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:22.795 15:59:23 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.795 15:59:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:22.795 15:59:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.795 15:59:23 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.795 15:59:23 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.795 15:59:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:22.795 15:59:23 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:22.795 15:59:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.795 15:59:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:22.795 15:59:23 -- setup/hugepages.sh@83 -- # : 513
00:03:22.795 15:59:23 -- setup/hugepages.sh@84 -- # : 1
00:03:22.795 15:59:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.795 15:59:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:22.795 15:59:23 -- setup/hugepages.sh@83 -- # : 0
00:03:22.795 15:59:23 -- setup/hugepages.sh@84 -- # : 0
00:03:22.795 15:59:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.795 15:59:23 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:22.795 15:59:23 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:22.795 15:59:23 -- setup/hugepages.sh@160 -- # setup output
00:03:22.795 15:59:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.795 15:59:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:23.727 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.727 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.727 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.727 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.727 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.727 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.727 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.727 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.727 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.727 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.727 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.987 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.987 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.987 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.987 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.987 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.987 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.987 15:59:25 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:23.987 15:59:25 -- setup/hugepages.sh@89 -- # local node
00:03:23.987 15:59:25 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.987 15:59:25 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.987 15:59:25 -- setup/hugepages.sh@92 -- # local surp
00:03:23.987 15:59:25 -- setup/hugepages.sh@93 -- # local resv
00:03:23.987 15:59:25 -- setup/hugepages.sh@94 -- # local anon
00:03:23.987 15:59:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.987 15:59:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.987 15:59:25 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.987 15:59:25 -- setup/common.sh@18 -- # local node=
00:03:23.987 15:59:25 -- setup/common.sh@19 -- # local var val
00:03:23.987 15:59:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:23.987 15:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.987 15:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.987 15:59:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.987 15:59:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.987 15:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.987 15:59:25 -- setup/common.sh@31 -- # IFS=': '
00:03:23.987 15:59:25 -- setup/common.sh@31 -- # read -r var val _
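Before the verification output continues, note what the hugepages.sh@81-84 entries above computed: odd_alloc requests 1025 pages (HUGEMEM=2049 MB of 2048 kB pages) and splits them across the two NUMA nodes as 512 for node1 and 513 for node0, so an odd total still divides without loss. An illustrative re-creation of that arithmetic (not the harness's actual loop):

    # Split an odd hugepage count over the nodes, walking backwards so
    # the remainder accumulates on node0: node1 gets 512, node0 gets 513.
    total=1025
    no_nodes=2
    declare -a pages
    while (( no_nodes > 0 )); do
        pages[no_nodes - 1]=$(( total / no_nodes ))  # 512 on the first pass
        (( total -= pages[no_nodes - 1] ))           # leaves 513 for node0
        (( no_nodes-- ))
    done
    echo "node0=${pages[0]} node1=${pages[1]}"       # node0=513 node1=512

The trace resumes below with verify_nr_hugepages reading the post-allocation /proc/meminfo snapshot.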
00:03:23.988 15:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47062156 kB' 'MemAvailable: 48031716 kB' 'Buffers: 1308 kB' 'Cached: 13940916 kB' 'SwapCached: 0 kB' 'Active: 14004344 kB' 'Inactive: 545320 kB' 'Active(anon): 13334112 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610648 kB' 'Mapped: 178276 kB' 'Shmem: 12726672 kB' 'KReclaimable: 419320 kB' 'Slab: 799012 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379692 kB' 'KernelStack: 13136 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40190348 kB' 'Committed_AS: 14514232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197232 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:23.988 15:59:25 -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: snapshot scanned field by field for AnonHugePages]
00:03:23.988 15:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.988 15:59:25 -- setup/common.sh@33 -- # echo 0
00:03:23.988 15:59:25 -- setup/common.sh@33 -- # return 0
00:03:23.988 15:59:25 -- setup/hugepages.sh@97 -- # anon=0
00:03:23.988 15:59:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.988 15:59:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.988 15:59:25 -- setup/common.sh@18 -- # local node=
00:03:23.988 15:59:25 -- setup/common.sh@19 -- # local var val
00:03:23.988 15:59:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:23.988 15:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.988 15:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.988 15:59:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.988 15:59:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.988 15:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.988 15:59:25 -- setup/common.sh@31 -- # IFS=': '
00:03:23.988 15:59:25 -- setup/common.sh@31 -- # read -r var val _
00:03:23.988 15:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47062256 kB' 'MemAvailable: 48031816 kB' 'Buffers: 1308 kB' 'Cached: 13940920 kB' 'SwapCached: 0 kB' 'Active: 14004284 kB' 'Inactive: 545320 kB' 'Active(anon): 13334052 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610564 kB' 'Mapped: 178284 kB' 'Shmem: 12726676 kB' 'KReclaimable: 419320 kB' 'Slab: 799008 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379688 kB' 'KernelStack: 12864 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40190348 kB' 'Committed_AS: 14514244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197216 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:23.988 15:59:25 -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: post-allocation snapshot scanned field by field for HugePages_Surp; the excerpt ends mid-scan at VmallocTotal]
00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 15:59:25 -- setup/common.sh@33 -- # echo 0 00:03:23.989 15:59:25 -- setup/common.sh@33 -- # return 0 00:03:23.989 15:59:25 -- setup/hugepages.sh@99 -- # surp=0 00:03:23.989 15:59:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.989 15:59:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.989 15:59:25 -- setup/common.sh@18 -- # local node= 00:03:23.989 15:59:25 -- setup/common.sh@19 -- # local var val 00:03:23.989 15:59:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.989 15:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.989 15:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.989 15:59:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.989 15:59:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.989 15:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47063044 kB' 'MemAvailable: 48032604 kB' 'Buffers: 1308 kB' 'Cached: 13940924 kB' 'SwapCached: 0 kB' 'Active: 14002636 kB' 'Inactive: 545320 kB' 'Active(anon): 13332404 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608944 kB' 'Mapped: 178244 kB' 'Shmem: 12726680 kB' 'KReclaimable: 419320 kB' 'Slab: 798976 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379656 kB' 'KernelStack: 12960 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40190348 kB' 'Committed_AS: 14514260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197216 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 
15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 
00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # continue 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.989 15:59:25 -- setup/common.sh@33 -- # echo 0 00:03:23.989 15:59:25 -- setup/common.sh@33 -- # return 0 00:03:23.989 15:59:25 -- setup/hugepages.sh@100 -- # resv=0 00:03:23.989 15:59:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:23.989 nr_hugepages=1025 00:03:23.989 15:59:25 -- setup/hugepages.sh@103 -- # 
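For readability, the condensed trace above is setup/common.sh's get_meminfo helper doing a linear scan of a meminfo snapshot: it reads the file into an array, strips the "Node N " prefix that per-node sysfs files carry, then tests one key per line and skips every non-match with 'continue'. Below is a minimal standalone sketch of that pattern, reconstructed from the trace alone rather than copied from the SPDK sources, so details such as the extglob prefix strip and the fallthrough return code are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob  # the +([0-9]) pattern below needs extended globbing

    # Sketch of the lookup traced above: scan a meminfo snapshot line by line
    # and print the value of one key, optionally for a single NUMA node.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # With a node argument, read the per-node counters from sysfs instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that first.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Split "HugePages_Total:    1025" into key and value on ':' and spaces.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the repeated 'continue' in the trace
            echo "$val"                       # numeric value only; a trailing 'kB' lands in $_
            return 0
        done
        return 1  # assumed behavior when the key is absent
    }

    get_meminfo HugePages_Rsvd    # system-wide; prints 0 on this machine
    get_meminfo HugePages_Surp 0  # node0 only; also 0 here

On this box both calls print 0, matching the surp=0 and resv=0 values recorded in the log above.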
15:59:25 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:23.990 15:59:25 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:23.990 15:59:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:23.990 15:59:25 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:23.990 15:59:25 -- setup/common.sh@18 -- # local node=
00:03:23.990 15:59:25 -- setup/common.sh@19 -- # local var val
00:03:23.990 15:59:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:23.990 15:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.990 15:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.990 15:59:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.990 15:59:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.990 15:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.990 15:59:25 -- setup/common.sh@31 -- # IFS=': '
00:03:23.990 15:59:25 -- setup/common.sh@31 -- # read -r var val _
00:03:23.990 15:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47063316 kB' 'MemAvailable: 48032876 kB' 'Buffers: 1308 kB' 'Cached: 13940948 kB' 'SwapCached: 0 kB' 'Active: 14002552 kB' 'Inactive: 545320 kB' 'Active(anon): 13332320 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608880 kB' 'Mapped: 178244 kB' 'Shmem: 12726704 kB' 'KReclaimable: 419320 kB' 'Slab: 798972 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379652 kB' 'KernelStack: 12944 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40190348 kB' 'Committed_AS: 14514272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197216 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@32 xtrace elided: per-field scan against HugePages_Total ...]
00:03:23.990 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
15:59:25 -- setup/common.sh@33 -- # echo 1025
00:03:23.990 15:59:25 -- setup/common.sh@33 -- # return 0
00:03:23.990 15:59:25 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:23.990 15:59:25 -- setup/hugepages.sh@112 -- # get_nodes
00:03:23.990 15:59:25 -- setup/hugepages.sh@27 -- # local node
00:03:23.990 15:59:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.990 15:59:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:23.990 15:59:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.990 15:59:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:23.990 15:59:25 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:23.990 15:59:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
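Restated as straight-line shell, the hugepages.sh bookkeeping that the next stretch of trace performs looks roughly like the sketch below. It reuses the get_meminfo sketch above; the 512/513 per-node split and the variable names (nodes_test, surp, resv, nr_hugepages) mirror the trace, while the surrounding control flow is simplified and therefore only an approximation of the real script:

    # Hypothetical driver for the per-node bookkeeping traced below; reuses the
    # get_meminfo sketch above. Values mirror this run: 1025 pages requested,
    # split 512/513 across the two NUMA nodes, no surplus or reserved pages.
    nr_hugepages=1025 surp=0 resv=0
    declare -A nodes_test=([0]=512 [1]=513)

    # System-wide consistency: requested + surplus + reserved must equal the total.
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total" >&2

    # Each node absorbs its share of reserved pages, then its surplus count is
    # read back from /sys/devices/system/node/node<N>/meminfo.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node: expecting ${nodes_test[$node]} hugepages"
    done

With surp=0 and resv=0 the check reduces to HugePages_Total == 1025, which the trace confirms before walking node0 and node1.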
00:03:23.990 15:59:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.990 15:59:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.990 15:59:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:23.990 15:59:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.990 15:59:25 -- setup/common.sh@18 -- # local node=0
00:03:23.990 15:59:25 -- setup/common.sh@19 -- # local var val
00:03:23.990 15:59:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:23.990 15:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.990 15:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:23.990 15:59:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:23.990 15:59:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.990 15:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.990 15:59:25 -- setup/common.sh@31 -- # IFS=': '
00:03:23.990 15:59:25 -- setup/common.sh@31 -- # read -r var val _
00:03:23.991 15:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876944 kB' 'MemFree: 23703560 kB' 'MemUsed: 9173384 kB' 'SwapCached: 0 kB' 'Active: 6829124 kB' 'Inactive: 347252 kB' 'Active(anon): 6405784 kB' 'Inactive(anon): 0 kB' 'Active(file): 423340 kB' 'Inactive(file): 347252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6992328 kB' 'Mapped: 74828 kB' 'AnonPages: 187252 kB' 'Shmem: 6221736 kB' 'KernelStack: 7656 kB' 'PageTables: 5108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 174924 kB' 'Slab: 369532 kB' 'SReclaimable: 174924 kB' 'SUnreclaim: 194608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@32 xtrace elided: per-field scan of the node0 meminfo against HugePages_Surp ...]
00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
15:59:25 -- setup/common.sh@33 -- # echo 0
00:03:24.249 15:59:25 -- setup/common.sh@33 -- # return 0
00:03:24.249 15:59:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.249 15:59:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.249 15:59:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.249 15:59:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:24.249 15:59:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.249 15:59:25 -- setup/common.sh@18 -- # local node=1
00:03:24.249 15:59:25 -- setup/common.sh@19 -- # local var val
00:03:24.249 15:59:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.249 15:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.249 15:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:24.249 15:59:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:24.249 15:59:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.249 15:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': '
00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _
00:03:24.249 15:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825744 kB' 'MemFree: 23359504 kB' 'MemUsed: 9466240 kB' 'SwapCached: 0 kB' 'Active: 7173460 kB' 'Inactive: 198068 kB' 'Active(anon): 6926568 kB' 'Inactive(anon): 0 kB' 'Active(file): 246892 kB' 'Inactive(file): 198068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6949944 kB' 'Mapped: 103416 kB' 'AnonPages: 421636 kB' 'Shmem: 6504984 kB' 'KernelStack: 5288 kB' 'PageTables: 3396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 244396 kB' 'Slab: 429440 kB' 'SReclaimable: 244396 kB' 'SUnreclaim: 185044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[... setup/common.sh@32 xtrace elided: per-field scan of the node1 meminfo against HugePages_Surp, through Writeback ...]
00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.249 15:59:25 -- setup/common.sh@32 -- #
continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.249 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.249 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.250 15:59:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.250 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.250 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.250 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.250 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.250 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.250 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.250 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.250 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.250 15:59:25 -- setup/common.sh@32 -- # continue 00:03:24.250 15:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.250 15:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.250 15:59:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.250 15:59:25 -- setup/common.sh@33 -- # echo 0 00:03:24.250 15:59:25 -- setup/common.sh@33 -- # return 0 00:03:24.250 15:59:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.250 15:59:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.250 15:59:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.250 15:59:25 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:24.250 node0=512 expecting 513 00:03:24.250 15:59:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.250 15:59:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.250 15:59:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.250 15:59:25 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:24.250 node1=513 expecting 512 00:03:24.250 
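
The xtrace block above is the setup/common.sh get_meminfo helper scanning /sys/devices/system/node/node1/meminfo one key at a time until it reaches HugePages_Surp. A minimal standalone sketch of that parsing pattern, reconstructed from the trace (an approximation of what the trace reveals, not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern below

    # get_meminfo FIELD [NODE] - print one meminfo value, system-wide or per NUMA node.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # sysfs prefixes each line with "Node N "
        while IFS=': ' read -r var val _; do  # split "HugePages_Surp:   0" into key/value
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 for the node-1 dump above

Each `continue` in the trace is one non-matching key; the scan ends at the `echo 0` / `return 0` pair, which is why every pass above terminates with "echo 0" followed by "return 0".
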
15:59:25 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:24.250 00:03:24.250 real 0m1.397s 00:03:24.250 user 0m0.601s 00:03:24.250 sys 0m0.760s 00:03:24.250 15:59:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:24.250 15:59:25 -- common/autotest_common.sh@10 -- # set +x 00:03:24.250 ************************************ 00:03:24.250 END TEST odd_alloc 00:03:24.250 ************************************ 00:03:24.250 15:59:25 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:24.250 15:59:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.250 15:59:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.250 15:59:25 -- common/autotest_common.sh@10 -- # set +x 00:03:24.250 ************************************ 00:03:24.250 START TEST custom_alloc 00:03:24.250 ************************************ 00:03:24.250 15:59:25 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:24.250 15:59:25 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:24.250 15:59:25 -- setup/hugepages.sh@169 -- # local node 00:03:24.250 15:59:25 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:24.250 15:59:25 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:24.250 15:59:25 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:24.250 15:59:25 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:24.250 15:59:25 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:24.250 15:59:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:24.250 15:59:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.250 15:59:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.250 15:59:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.250 15:59:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:24.250 15:59:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.250 15:59:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.250 15:59:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.250 15:59:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:24.250 15:59:25 -- setup/hugepages.sh@83 -- # : 256 00:03:24.250 15:59:25 -- setup/hugepages.sh@84 -- # : 1 00:03:24.250 15:59:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:24.250 15:59:25 -- setup/hugepages.sh@83 -- # : 0 00:03:24.250 15:59:25 -- setup/hugepages.sh@84 -- # : 0 00:03:24.250 15:59:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:24.250 15:59:25 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:24.250 15:59:25 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.250 15:59:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.250 15:59:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.250 15:59:25 -- setup/hugepages.sh@62 -- 
# user_nodes=() 00:03:24.250 15:59:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.250 15:59:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.250 15:59:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.250 15:59:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.250 15:59:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.250 15:59:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.250 15:59:25 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.250 15:59:25 -- setup/hugepages.sh@78 -- # return 0 00:03:24.250 15:59:25 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:24.250 15:59:25 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.250 15:59:25 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.250 15:59:25 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.250 15:59:25 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.250 15:59:25 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:24.250 15:59:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.250 15:59:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.250 15:59:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.250 15:59:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.250 15:59:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.250 15:59:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.250 15:59:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:24.250 15:59:25 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.250 15:59:25 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.250 15:59:25 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.250 15:59:25 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:24.250 15:59:25 -- setup/hugepages.sh@78 -- # return 0 00:03:24.250 15:59:25 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:24.250 15:59:25 -- setup/hugepages.sh@187 -- # setup output 00:03:24.250 15:59:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.250 15:59:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.622 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.622 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.622 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.622 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.622 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.622 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.622 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.622 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.622 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.622 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.622 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.622 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:03:25.622 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.622 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.622 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.622 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.622 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.622 15:59:26 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:25.622 15:59:26 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:25.622 15:59:26 -- setup/hugepages.sh@89 -- # local node 00:03:25.622 15:59:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.622 15:59:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.622 15:59:26 -- setup/hugepages.sh@92 -- # local surp 00:03:25.622 15:59:26 -- setup/hugepages.sh@93 -- # local resv 00:03:25.622 15:59:26 -- setup/hugepages.sh@94 -- # local anon 00:03:25.622 15:59:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.622 15:59:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.622 15:59:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.622 15:59:26 -- setup/common.sh@18 -- # local node= 00:03:25.622 15:59:26 -- setup/common.sh@19 -- # local var val 00:03:25.622 15:59:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.622 15:59:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.622 15:59:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.622 15:59:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.622 15:59:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.622 15:59:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 46015808 kB' 'MemAvailable: 46985368 kB' 'Buffers: 1308 kB' 'Cached: 13941016 kB' 'SwapCached: 0 kB' 'Active: 14003344 kB' 'Inactive: 545320 kB' 'Active(anon): 13333112 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609196 kB' 'Mapped: 178120 kB' 'Shmem: 12726772 kB' 'KReclaimable: 419320 kB' 'Slab: 798612 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379292 kB' 'KernelStack: 12880 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39667084 kB' 'Committed_AS: 14514092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197216 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 
15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- 
setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.622 15:59:26 -- setup/common.sh@33 -- # echo 0 00:03:25.622 15:59:26 -- setup/common.sh@33 -- # return 0 00:03:25.622 15:59:26 -- setup/hugepages.sh@97 -- # anon=0 00:03:25.622 15:59:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.622 15:59:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.622 15:59:26 -- setup/common.sh@18 -- # local node= 00:03:25.622 15:59:26 -- setup/common.sh@19 -- # local var val 00:03:25.622 15:59:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.622 15:59:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.622 15:59:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.622 15:59:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.622 15:59:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.622 15:59:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 46015808 kB' 'MemAvailable: 46985368 kB' 'Buffers: 1308 kB' 'Cached: 13941016 kB' 'SwapCached: 0 kB' 'Active: 14003356 kB' 'Inactive: 545320 kB' 'Active(anon): 13333124 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609192 kB' 'Mapped: 178120 kB' 'Shmem: 12726772 kB' 'KReclaimable: 419320 kB' 'Slab: 798608 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379288 kB' 'KernelStack: 12864 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39667084 kB' 'Committed_AS: 14514104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197184 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 
15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 
15:59:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 
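
The system-wide scans running here are verify_nr_hugepages reading AnonHugePages (already resolved to 0 above), then HugePages_Surp and HugePages_Rsvd the same way, to check the allocation that custom_alloc requested. How that request was built is visible in the hugepages.sh@181-@188 trace further up; a compact sketch of it, reconstructed from the xtrace (the 2048 kB page size comes from the Hugepagesize field in the meminfo dumps):

    # custom_alloc's per-node targets: size-in-kB / 2048 kB per hugepage
    declare -a nodes_hp
    nodes_hp[0]=512     # from get_test_nr_hugepages 1048576
    nodes_hp[1]=1024    # from get_test_nr_hugepages 2097152

    declare -a parts=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        parts+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    HUGENODE=$(IFS=,; printf '%s' "${parts[*]}")
    echo "$HUGENODE"          # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "$_nr_hugepages"     # 1536, the total the verifier checks against

setup.sh was then invoked with HUGENODE set, and the "HugePages_Total: 1536" / "HugePages_Free: 1536" fields in the dumps above confirm the pages were reserved before this verification pass.
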
00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.623 15:59:26 -- setup/common.sh@33 -- # echo 0 00:03:25.623 15:59:26 -- setup/common.sh@33 -- # return 0 00:03:25.623 15:59:26 -- setup/hugepages.sh@99 -- # surp=0 00:03:25.623 15:59:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.623 15:59:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.623 15:59:26 -- setup/common.sh@18 -- # local node= 00:03:25.623 15:59:26 -- setup/common.sh@19 -- # local var val 00:03:25.623 15:59:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.623 15:59:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.623 15:59:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.623 15:59:26 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.623 15:59:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.623 15:59:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 46015556 kB' 'MemAvailable: 46985116 kB' 'Buffers: 1308 kB' 'Cached: 13941028 kB' 'SwapCached: 0 kB' 'Active: 14002488 kB' 'Inactive: 545320 kB' 'Active(anon): 13332256 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608780 kB' 'Mapped: 178272 kB' 'Shmem: 12726784 kB' 'KReclaimable: 419320 kB' 'Slab: 798604 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379284 kB' 'KernelStack: 12896 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39667084 kB' 'Committed_AS: 14514620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197184 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB' 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.623 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.623 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.623 15:59:26 -- 
setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 repeat "IFS=': '" / "read -r var val _" / "continue" for each /proc/meminfo field from Inactive through HugePages_Free while scanning for HugePages_Rsvd]
00:03:25.624 15:59:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.624 15:59:26 -- setup/common.sh@33 -- # echo 0
00:03:25.624 15:59:26 -- setup/common.sh@33 -- # return 0
00:03:25.624 15:59:26 -- setup/hugepages.sh@100 -- # resv=0
00:03:25.624 15:59:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:25.624 nr_hugepages=1536
00:03:25.624 15:59:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.624 resv_hugepages=0
00:03:25.624 15:59:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.624 surplus_hugepages=0
00:03:25.624 15:59:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.624 anon_hugepages=0
00:03:25.624 15:59:26 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:25.624 15:59:26 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:25.624 15:59:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.624 15:59:26 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:25.624 15:59:26 -- setup/common.sh@18 -- # local node=
00:03:25.624 15:59:26 -- setup/common.sh@19 -- # local var val
00:03:25.624 15:59:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:25.624 15:59:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.624 15:59:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.624 15:59:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.624 15:59:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.624 15:59:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.624 15:59:26 -- setup/common.sh@31 -- # IFS=': '
00:03:25.624 15:59:26 -- setup/common.sh@31 -- # read -r var val _
00:03:25.624 15:59:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 46015808 kB' 'MemAvailable: 46985368 kB' 'Buffers: 1308 kB' 'Cached: 13941052 kB' 'SwapCached: 0 kB' 'Active: 14002648 kB' 'Inactive: 545320 kB' 'Active(anon): 13332416 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608944 kB' 'Mapped: 178272 kB' 'Shmem: 12726808 kB' 'KReclaimable: 419320 kB' 'Slab: 798604 kB' 'SReclaimable: 419320 kB' 'SUnreclaim: 379284 kB' 'KernelStack: 12896 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39667084 kB' 'Committed_AS: 14514632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197200 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
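For readability, this is what the setup/common.sh get_meminfo helper traced above boils down to, reconstructed from the xtrace entries (@16-@33). It is a sketch of the traced logic, not the verbatim SPDK source; the early-return guard on a missing per-node file is an assumption.

shopt -s extglob                      # the +([0-9]) pattern below needs extended globbing

# get_meminfo <field> [numa-node] -- print the value of <field> from
# /proc/meminfo, or from the node's own meminfo file when a node is given.
get_meminfo() {
    local get=$1
    local node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    elif [[ -n $node ]]; then
        return 1                      # assumption: fail when a named node has no meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix on per-node entries
    # the long runs of "continue" in the trace are this loop skipping fields
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

Called as get_meminfo HugePages_Total it prints 1536 in this run, as the echo further down shows; with a node argument it reads /sys/devices/system/node/nodeN/meminfo instead, which is what the per-node queries below do.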
[xtrace condensed: setup/common.sh@31-32 repeat the IFS/read/continue loop for each field from MemTotal through Unaccepted while scanning for HugePages_Total]
setup/common.sh@31 -- # IFS=': ' 00:03:25.625 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.625 15:59:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.625 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.625 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.625 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.625 15:59:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.625 15:59:26 -- setup/common.sh@33 -- # echo 1536 00:03:25.625 15:59:26 -- setup/common.sh@33 -- # return 0 00:03:25.625 15:59:26 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:25.625 15:59:26 -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.625 15:59:26 -- setup/hugepages.sh@27 -- # local node 00:03:25.625 15:59:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.625 15:59:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.625 15:59:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.625 15:59:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:25.625 15:59:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.625 15:59:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.625 15:59:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.625 15:59:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.625 15:59:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.625 15:59:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.626 15:59:26 -- setup/common.sh@18 -- # local node=0 00:03:25.626 15:59:26 -- setup/common.sh@19 -- # local var val 00:03:25.626 15:59:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.626 15:59:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.626 15:59:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.626 15:59:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.626 15:59:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.626 15:59:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.626 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.626 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.626 15:59:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876944 kB' 'MemFree: 23699548 kB' 'MemUsed: 9177396 kB' 'SwapCached: 0 kB' 'Active: 6829304 kB' 'Inactive: 347252 kB' 'Active(anon): 6405964 kB' 'Inactive(anon): 0 kB' 'Active(file): 423340 kB' 'Inactive(file): 347252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6992420 kB' 'Mapped: 74828 kB' 'AnonPages: 187360 kB' 'Shmem: 6221828 kB' 'KernelStack: 7624 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 174924 kB' 'Slab: 369352 kB' 'SReclaimable: 174924 kB' 'SUnreclaim: 194428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.626 15:59:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.626 15:59:26 -- setup/common.sh@32 -- # continue 00:03:25.626 15:59:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.626 15:59:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.626 15:59:26 -- 
[xtrace condensed: setup/common.sh@31-32 repeat the IFS/read/continue loop for each node0 meminfo field from MemTotal through HugePages_Free while scanning for HugePages_Surp]
00:03:25.626 15:59:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.626 15:59:26 -- setup/common.sh@33 -- # echo 0
00:03:25.626 15:59:26 -- setup/common.sh@33 -- # return 0
00:03:25.626 15:59:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.626 15:59:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.626 15:59:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.626 15:59:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:25.626 15:59:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.626 15:59:26 -- setup/common.sh@18 -- # local node=1
00:03:25.626 15:59:26 -- setup/common.sh@19 -- # local var val
00:03:25.626 15:59:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:25.626 15:59:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.626 15:59:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:25.626 15:59:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:25.626 15:59:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.626 15:59:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.626 15:59:26 -- setup/common.sh@31 -- # IFS=': '
00:03:25.626 15:59:26 -- setup/common.sh@31 -- # read -r var val _
00:03:25.626 15:59:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825744 kB' 'MemFree: 22316260 kB' 'MemUsed: 10509484 kB' 'SwapCached: 0 kB' 'Active: 7173392 kB' 'Inactive: 198068 kB' 'Active(anon): 6926500 kB' 'Inactive(anon): 0 kB' 'Active(file): 246892 kB' 'Inactive(file): 198068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6949956 kB' 'Mapped: 103444 kB' 'AnonPages: 421616 kB' 'Shmem: 6504996 kB' 'KernelStack: 5288 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 244396 kB' 'Slab: 429252 kB' 'SReclaimable: 244396 kB' 'SUnreclaim: 184856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
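Once the node1 query returns, the results are folded into the expected-versus-actual comparison that closes the test further down. A sketch reconstructed from the @126-@130 entries; the joined-list form of the final check is an assumption inferred from the '512,1024' pattern match at @130:

sorted_t=()
sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1     # collect the distinct expected counts
    sorted_s[nodes_sys[node]]=1      # collect the distinct counts seen in sysfs
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# assumed final check: per-node actual values must equal the expected list
[[ $(IFS=,; echo "${nodes_sys[*]}") == "$(IFS=,; echo "${nodes_test[*]}")" ]]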
[xtrace condensed: setup/common.sh@31-32 repeat the IFS/read/continue loop for each node1 meminfo field from MemTotal through HugePages_Free while scanning for HugePages_Surp]
00:03:25.627 15:59:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.627 15:59:26 -- setup/common.sh@33 -- # echo 0
00:03:25.627 15:59:26 -- setup/common.sh@33 -- # return 0
00:03:25.627 15:59:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.627 15:59:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.627 15:59:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.627 15:59:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.627 15:59:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:25.627 node0=512 expecting 512
00:03:25.627 15:59:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.627 15:59:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.627 15:59:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.627 15:59:26 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:25.627 node1=1024 expecting 1024
00:03:25.627 15:59:26 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:25.627
00:03:25.627 real	0m1.409s
00:03:25.627 user	0m0.618s
00:03:25.627 sys	0m0.752s
00:03:25.627 15:59:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:25.627 15:59:26 -- common/autotest_common.sh@10 -- # set +x
00:03:25.627 ************************************
00:03:25.627 END TEST custom_alloc
00:03:25.627 ************************************
00:03:25.627 15:59:26 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:25.627 15:59:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:25.627 15:59:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:25.627 15:59:26 -- common/autotest_common.sh@10 -- # set +x
00:03:25.885 ************************************
00:03:25.885 START TEST no_shrink_alloc
00:03:25.885 ************************************
00:03:25.885 15:59:26 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:03:25.885 15:59:26 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:25.885 15:59:26 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:25.885 15:59:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:25.885 15:59:26 -- setup/hugepages.sh@51 -- # shift
00:03:25.885 15:59:26 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:25.885 15:59:26 -- setup/hugepages.sh@52 -- # local node_ids
00:03:25.885 15:59:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.885 15:59:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:25.885 15:59:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
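get_test_nr_hugepages 2097152 0 requests 2097152 kB of hugepages, i.e. 1024 pages of 2048 kB, pinned to node 0 (the trailing 0 becomes node_ids). The trace that follows steps through get_test_nr_hugepages_per_node; reconstructed from entries @62-@73 it amounts to the sketch below. The even-split branch used when no nodes are named is not exercised in this run and is omitted.

get_test_nr_hugepages_per_node() {
    local user_nodes=("$@")          # node ids passed through from node_ids
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=$no_nodes
    local -g nodes_test=()           # shows up as two @67 entries in the trace
    if (( ${#user_nodes[@]} > 0 )); then
        # the trace really does reuse _no_nodes as the loop variable here
        for _no_nodes in "${user_nodes[@]}"; do
            nodes_test[_no_nodes]=$_nr_hugepages
        done
        return 0
    fi
    # (no user nodes: split _nr_hugepages across _no_nodes nodes -- omitted)
}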
00:03:25.885 15:59:26 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:25.885 15:59:26 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.885 15:59:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:25.885 15:59:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.885 15:59:26 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.885 15:59:26 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.885 15:59:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:25.885 15:59:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:25.885 15:59:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:25.885 15:59:26 -- setup/hugepages.sh@73 -- # return 0
00:03:25.885 15:59:26 -- setup/hugepages.sh@198 -- # setup output
00:03:25.885 15:59:26 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:25.885 15:59:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:26.823 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:26.823 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:26.823 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:26.823 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:26.823 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:26.823 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:26.823 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:26.823 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:26.823 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:26.823 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:26.823 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:26.823 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:26.823 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:26.823 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:26.823 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:26.823 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:26.823 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:27.085 15:59:28 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:27.085 15:59:28 -- setup/hugepages.sh@89 -- # local node
00:03:27.085 15:59:28 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:27.085 15:59:28 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:27.085 15:59:28 -- setup/hugepages.sh@92 -- # local surp
00:03:27.085 15:59:28 -- setup/hugepages.sh@93 -- # local resv
00:03:27.085 15:59:28 -- setup/hugepages.sh@94 -- # local anon
00:03:27.085 15:59:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:27.085 15:59:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:27.085 15:59:28 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:27.085 15:59:28 -- setup/common.sh@18 -- # local node=
00:03:27.085 15:59:28 -- setup/common.sh@19 -- # local var val
00:03:27.085 15:59:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.085 15:59:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.085 15:59:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.085 15:59:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.085 15:59:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.085 15:59:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.085 15:59:28 -- setup/common.sh@31 -- # IFS=': '
setup/common.sh@31 -- # read -r var val _
00:03:27.085 15:59:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47051360 kB' 'MemAvailable: 48020904 kB' 'Buffers: 1308 kB' 'Cached: 13941116 kB' 'SwapCached: 0 kB' 'Active: 14003392 kB' 'Inactive: 545320 kB' 'Active(anon): 13333160 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609572 kB' 'Mapped: 178360 kB' 'Shmem: 12726872 kB' 'KReclaimable: 419304 kB' 'Slab: 798492 kB' 'SReclaimable: 419304 kB' 'SUnreclaim: 379188 kB' 'KernelStack: 12912 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14514684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197296 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:27.085 [... per-key xtrace elided: every /proc/meminfo field before AnonHugePages fails the match and hits continue ...]
00:03:27.086 15:59:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.086 15:59:28 -- setup/common.sh@33 -- # echo 0
00:03:27.086 15:59:28 -- setup/common.sh@33 -- # return 0
00:03:27.086 15:59:28 -- setup/hugepages.sh@97 -- # anon=0
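The collapsed loops above are setup/common.sh's get_meminfo scanning a snapshot of /proc/meminfo one field at a time. A minimal Bash sketch of that lookup, reconstructed from the xtrace rather than taken from the SPDK source (so treat the exact function body as an assumption; the names get, node, var, val, mem_f and mem are the ones visible in the trace):

#!/usr/bin/env bash
# Reconstruction of the traced lookup; not the verbatim setup/common.sh body.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # The trace tests both the per-node file's existence (common.sh@23) and
    # whether a node was requested (common.sh@25) before switching files.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node lines carry a "Node N " prefix; strip it (extglob pattern) so
    # both file formats parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    # Walk the fields until the requested key matches, then print its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages      # -> 0 on this box
get_meminfo HugePages_Surp 0   # -> node-0 value, read via the sysfs file

The three calls that follow (HugePages_Surp, HugePages_Rsvd, HugePages_Total) repeat exactly this scan over fresh snapshots.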
00:03:27.086 15:59:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:27.086 15:59:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.086 15:59:28 -- setup/common.sh@18 -- # local node=
00:03:27.086 15:59:28 -- setup/common.sh@19 -- # local var val
00:03:27.086 15:59:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.086 15:59:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.086 15:59:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.086 15:59:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.086 15:59:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.086 15:59:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.086 15:59:28 -- setup/common.sh@31 -- # IFS=': '
00:03:27.086 15:59:28 -- setup/common.sh@31 -- # read -r var val _
00:03:27.086 15:59:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47051732 kB' 'MemAvailable: 48021276 kB' 'Buffers: 1308 kB' 'Cached: 13941120 kB' 'SwapCached: 0 kB' 'Active: 14003332 kB' 'Inactive: 545320 kB' 'Active(anon): 13333100 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609524 kB' 'Mapped: 178360 kB' 'Shmem: 12726876 kB' 'KReclaimable: 419304 kB' 'Slab: 798492 kB' 'SReclaimable: 419304 kB' 'SUnreclaim: 379188 kB' 'KernelStack: 12864 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14514696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197248 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:27.086 [... per-key xtrace elided: every field before HugePages_Surp fails the match and hits continue ...]
00:03:27.087 15:59:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.087 15:59:28 -- setup/common.sh@33 -- # echo 0
00:03:27.087 15:59:28 -- setup/common.sh@33 -- # return 0
00:03:27.087 15:59:28 -- setup/hugepages.sh@99 -- # surp=0
00:03:27.087 15:59:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.087 [... get_meminfo preamble xtrace (locals, mem_f=/proc/meminfo, mapfile) identical to the call above; elided ...]
00:03:27.087 15:59:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47051484 kB' 'MemAvailable: 48021028 kB' 'Buffers: 1308 kB' 'Cached: 13941132 kB' 'SwapCached: 0 kB' 'Active: 14002904 kB' 'Inactive: 545320 kB' 'Active(anon): 13332672 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609064 kB' 'Mapped: 178340 kB' 'Shmem: 12726888 kB' 'KReclaimable: 419304 kB' 'Slab: 798532 kB' 'SReclaimable: 419304 kB' 'SUnreclaim: 379228 kB' 'KernelStack: 12912 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14514712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197264 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:27.087 [... per-key xtrace elided: every field before HugePages_Rsvd fails the match and hits continue ...]
00:03:27.088 15:59:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.088 15:59:28 -- setup/common.sh@33 -- # echo 0
00:03:27.088 15:59:28 -- setup/common.sh@33 -- # return 0
00:03:27.088 15:59:28 -- setup/hugepages.sh@100 -- # resv=0
00:03:27.088 15:59:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:27.088 nr_hugepages=1024
00:03:27.088 15:59:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:27.088 resv_hugepages=0
00:03:27.088 15:59:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:27.088 surplus_hugepages=0
00:03:27.088 15:59:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:27.088 anon_hugepages=0
00:03:27.088 15:59:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.088 15:59:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
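A sketch of the accounting identity those hugepages.sh@102-@109 lines assert, reconstructed from the trace (the surrounding function in setup/hugepages.sh is assumed; the literal 1024s in the trace are the shell's already-expanded operands, and get_meminfo is the helper sketched earlier):

nr_hugepages=1024                      # the pool size the test configured
anon=$(get_meminfo AnonHugePages)      # -> 0: no transparent hugepages in use
surp=$(get_meminfo HugePages_Surp)     # -> 0
resv=$(get_meminfo HugePages_Rsvd)     # -> 0
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
# The kernel's total must equal the requested pool plus surplus and reserved
# pages, otherwise the hugepage setup is inconsistent and the test bails.
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

The HugePages_Total fetch that follows supplies the left-hand side of the @110 re-check.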
00:03:27.088 15:59:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:27.088 [... get_meminfo preamble xtrace (locals, mem_f=/proc/meminfo, mapfile) elided ...]
00:03:27.088 15:59:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47051484 kB' 'MemAvailable: 48021028 kB' 'Buffers: 1308 kB' 'Cached: 13941132 kB' 'SwapCached: 0 kB' 'Active: 14002568 kB' 'Inactive: 545320 kB' 'Active(anon): 13332336 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608728 kB' 'Mapped: 178340 kB' 'Shmem: 12726888 kB' 'KReclaimable: 419304 kB' 'Slab: 798532 kB' 'SReclaimable: 419304 kB' 'SUnreclaim: 379228 kB' 'KernelStack: 12896 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14514724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197264 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:27.088 [... per-key xtrace elided: every field before HugePages_Total fails the match and hits continue ...]
00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:27.089 15:59:28 -- setup/common.sh@33 -- # echo 1024
00:03:27.089 15:59:28 -- setup/common.sh@33 -- # return 0
00:03:27.089 15:59:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.089 15:59:28 -- setup/hugepages.sh@112 -- # get_nodes
00:03:27.089 15:59:28 -- setup/hugepages.sh@27 -- # local node
00:03:27.089 15:59:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.089 15:59:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:27.089 15:59:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.089 15:59:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:27.089 15:59:28 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:27.089 15:59:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:27.089 15:59:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:27.089 15:59:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
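A sketch of the get_nodes walk just traced. The glob and the nodes_sys name come straight from the xtrace; the source of each node's count is already expanded in the trace (1024 and 0 appear as literals), so the sysfs read below is an assumption, with 2048 kB matching the Hugepagesize reported in the snapshots:

shopt -s extglob nullglob
declare -a nodes_sys
# Enumerate NUMA nodes and record how many hugepages each one holds.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 ))   # at least one NUMA node must be present
# Trace values: nodes_sys[0]=1024, nodes_sys[1]=0 -- the whole 1024-page pool
# sits on node 0, which the node0/meminfo lookup below then confirms.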
00:03:27.089 15:59:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.089 15:59:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.089 15:59:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.089 15:59:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.089 15:59:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876944 kB' 'MemFree: 22648376 kB' 'MemUsed: 10228568 kB' 'SwapCached: 0 kB' 'Active: 6828968 kB' 'Inactive: 347252 kB' 'Active(anon): 6405628 kB' 'Inactive(anon): 0 kB' 'Active(file): 423340 kB' 'Inactive(file): 347252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6992512 kB' 'Mapped: 74844 kB' 'AnonPages: 186936 kB' 'Shmem: 6221920 kB' 'KernelStack: 7560 kB' 'PageTables: 4804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 174908 kB' 'Slab: 369216 kB' 'SReclaimable: 174908 kB' 'SUnreclaim: 194308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.089 15:59:28 -- setup/common.sh@32 -- # continue 00:03:27.089 15:59:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 
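Each lookup cycle in the trace follows the same shape: pick /proc/meminfo or the per-node file, strip the "Node <n> " prefix that per-node files carry, then split each line on ': ' until the requested key matches and its value is echoed. Below is a minimal bash sketch of that pattern, simplified from the setup/common.sh records above rather than copied verbatim from SPDK:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below
  # Simplified sketch of the lookup the trace performs.
  # Usage: get_meminfo <Key> [node]
  get_meminfo() {
      local get=$1 node=$2 mem_f=/proc/meminfo line var val _
      # Per-node files prefix every line with "Node <n> ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node +([0-9]) }              # strip the per-node prefix
          IFS=': ' read -r var val _ <<< "$line"   # split "Key:  value kB"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
  }
  get_meminfo HugePages_Surp 0   # prints node0's surplus count, 0 in this run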
00:03:27.090 15:59:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.090 15:59:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.090 15:59:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.090 15:59:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.090 15:59:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:27.090 node0=1024 expecting 1024
00:03:27.090 15:59:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:27.090 15:59:28 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:27.090 15:59:28 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:27.090 15:59:28 -- setup/hugepages.sh@202 -- # setup output
00:03:27.090 15:59:28 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.090 15:59:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:28.464 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:28.465 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:28.465 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:28.465 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:28.465 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:28.465 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:28.465 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:28.465 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:28.465 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:28.465 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:28.465 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:28.465 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:28.465 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:28.465 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:28.465 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:28.465 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:28.465 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:28.465 INFO: Requested 512 hugepages but 1024 already allocated on node0
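hugepages.sh@202 steers scripts/setup.sh through environment variables rather than flags; with CLEAR_HUGE=no the existing 1024-page pool is left in place instead of being torn down to the requested NRHUGE=512, which is exactly what the INFO line above reports. A hedged sketch of the equivalent manual invocation, with values and paths mirroring this workspace:

  # Run as root, as the CI job does; CLEAR_HUGE=no keeps pages already allocated.
  sudo CLEAR_HUGE=no NRHUGE=512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh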
00:03:28.465 15:59:29 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:28.465 15:59:29 -- setup/hugepages.sh@89 -- # local node
00:03:28.465 15:59:29 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:28.465 15:59:29 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:28.465 15:59:29 -- setup/hugepages.sh@92 -- # local surp
00:03:28.465 15:59:29 -- setup/hugepages.sh@93 -- # local resv
00:03:28.465 15:59:29 -- setup/hugepages.sh@94 -- # local anon
00:03:28.465 15:59:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:28.465 15:59:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:28.465 15:59:29 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:28.465 15:59:29 -- setup/common.sh@18 -- # local node=
00:03:28.465 15:59:29 -- setup/common.sh@19 -- # local var val
00:03:28.465 15:59:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.465 15:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.465 15:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.465 15:59:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.465 15:59:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.465 15:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.465 15:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47038916 kB' 'MemAvailable: 48008460 kB' 'Buffers: 1308 kB' 'Cached: 13941188 kB' 'SwapCached: 0 kB' 'Active: 14003760 kB' 'Inactive: 545320 kB' 'Active(anon): 13333528 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609736 kB' 'Mapped: 178360 kB' 'Shmem: 12726944 kB' 'KReclaimable: 419304 kB' 'Slab: 798440 kB' 'SReclaimable: 419304 kB' 'SUnreclaim: 379136 kB' 'KernelStack: 13056 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14517300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197440 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:28.465 15:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:28.465 15:59:29 -- setup/common.sh@33 -- # echo 0
00:03:28.465 15:59:29 -- setup/common.sh@33 -- # return 0
00:03:28.465 15:59:29 -- setup/hugepages.sh@97 -- # anon=0
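The gate at setup/hugepages.sh@96 expands to the contents of /sys/kernel/mm/transparent_hugepage/enabled, here "always [madvise] never", and only counts AnonHugePages toward the expected total when the bracketed mode is not [never]. A self-contained sketch of that check; the awk lookup is an assumption standing in for get_meminfo:

  #!/usr/bin/env bash
  # AnonHugePages only counts when THP is not globally disabled.
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # Value is reported in kB; 0 kB in this run.
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon_hugepages=$anon"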
00:03:28.465 15:59:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:28.465 15:59:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.465 15:59:29 -- setup/common.sh@18 -- # local node=
00:03:28.465 15:59:29 -- setup/common.sh@19 -- # local var val
00:03:28.465 15:59:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.465 15:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.465 15:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.465 15:59:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.465 15:59:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.465 15:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.465 15:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47038596 kB' 'MemAvailable: 48008140 kB' 'Buffers: 1308 kB' 'Cached: 13941192 kB' 'SwapCached: 0 kB' 'Active: 14004792 kB' 'Inactive: 545320 kB' 'Active(anon): 13334560 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610744 kB' 'Mapped: 178360 kB' 'Shmem: 12726948 kB' 'KReclaimable: 419304 kB' 'Slab: 798440 kB' 'SReclaimable: 419304 kB' 'SUnreclaim: 379136 kB' 'KernelStack: 13328 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14517312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197440 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:28.466 15:59:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.466 15:59:29 -- setup/common.sh@33 -- # echo 0
00:03:28.466 15:59:29 -- setup/common.sh@33 -- # return 0
00:03:28.466 15:59:29 -- setup/hugepages.sh@99 -- # surp=0
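The surp, resv and anon values being collected here all feed the consistency check at hugepages.sh@107 further down: the kernel's HugePages_Total must equal the configured nr_hugepages plus surplus and reserved pages. A condensed sketch of that bookkeeping; the meminfo helper name and the final echo format are illustrative:

  #!/usr/bin/env bash
  # Pull one numeric field out of /proc/meminfo by key.
  meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
  nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)   # 1024 in this run
  surp=$(meminfo HugePages_Surp)                  # 0
  resv=$(meminfo HugePages_Rsvd)                  # 0
  total=$(meminfo HugePages_Total)                # 1024
  # The invariant verify_nr_hugepages asserts:
  (( total == nr_hugepages + surp + resv )) && echo "nr_hugepages=$nr_hugepages verified"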
00:03:28.466 15:59:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.466 15:59:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.466 15:59:29 -- setup/common.sh@18 -- # local node=
00:03:28.466 15:59:29 -- setup/common.sh@19 -- # local var val
00:03:28.466 15:59:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.466 15:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.466 15:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.466 15:59:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.466 15:59:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.466 15:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.466 15:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47037536 kB' 'MemAvailable: 48007080 kB' 'Buffers: 1308 kB' 'Cached: 13941204 kB' 'SwapCached: 0 kB' 'Active: 14003704 kB' 'Inactive: 545320 kB' 'Active(anon): 13333472 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609692 kB' 'Mapped: 178272 kB' 'Shmem: 12726960 kB' 'KReclaimable: 419304 kB' 'Slab: 798440 kB' 'SReclaimable: 419304 kB' 'SUnreclaim: 379136 kB' 'KernelStack: 13008 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14515936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197376 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:28.467 15:59:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:28.467 15:59:29 -- setup/common.sh@33 -- # echo 0
00:03:28.467 15:59:29 -- setup/common.sh@33 -- # return 0
00:03:28.467 15:59:29 -- setup/hugepages.sh@100 -- # resv=0
00:03:28.467 15:59:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:28.467 nr_hugepages=1024
00:03:28.467 15:59:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:28.467 resv_hugepages=0
00:03:28.467 15:59:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:28.467 surplus_hugepages=0
00:03:28.467 15:59:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:28.467 anon_hugepages=0
00:03:28.467 15:59:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:28.467 15:59:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:28.467 15:59:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:28.467 15:59:29 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:28.467 15:59:29 -- setup/common.sh@18 -- # local node=
00:03:28.467 15:59:29 -- setup/common.sh@19 -- # local var val
00:03:28.467 15:59:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.467 15:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.467 15:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.467 15:59:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.467 15:59:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.467 15:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.467 15:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65702688 kB' 'MemFree: 47034992 kB' 'MemAvailable: 48004536 kB' 'Buffers: 1308 kB' 'Cached: 13941216 kB' 'SwapCached: 0 kB' 'Active: 14005228 kB' 'Inactive: 545320 kB' 'Active(anon): 13334996 kB' 'Inactive(anon): 0 kB' 'Active(file): 670232 kB' 'Inactive(file): 545320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611152 kB' 'Mapped: 178196 kB' 'Shmem: 12726972 kB' 'KReclaimable: 419304 kB' 'Slab: 798440 kB' 'SReclaimable: 419304 kB' 'SUnreclaim: 379136 kB' 'KernelStack: 13504 kB' 'PageTables: 9804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40191372 kB' 'Committed_AS: 14517340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197520 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1695324 kB' 'DirectMap2M: 16050176 kB' 'DirectMap1G: 51380224 kB'
00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 
15:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 
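# The lines being traced here are setup/common.sh's get_meminfo: it reads the
# chosen meminfo file one 'Key: value' pair at a time, and the literal-match
# test only succeeds on the requested key, whose value is then echoed back
# (the "echo 1024" a few lines below). A minimal standalone sketch of that
# scan -- the function name is illustrative, and the sed strips the
# 'Node <n> ' prefix carried by the per-node sysfs copies of meminfo:
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1   # requested key not present
}
get_meminfo_sketch HugePages_Total     # -> 1024 on this runner
get_meminfo_sketch HugePages_Rsvd      # -> 0, as the scan above returned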
00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.468 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.468 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.469 15:59:29 -- setup/common.sh@33 -- # echo 1024 00:03:28.469 15:59:29 -- setup/common.sh@33 -- # return 0 00:03:28.469 15:59:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:03:28.469 15:59:29 -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.469 15:59:29 -- setup/hugepages.sh@27 -- # local node 00:03:28.469 15:59:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.469 15:59:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.469 15:59:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.469 15:59:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:28.469 15:59:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.469 15:59:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.469 15:59:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.469 15:59:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.469 15:59:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.469 15:59:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.469 15:59:29 -- setup/common.sh@18 -- # local node=0 00:03:28.469 15:59:29 -- setup/common.sh@19 -- # local var val 00:03:28.469 15:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.469 15:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.469 15:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.469 15:59:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.469 15:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.469 15:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876944 kB' 'MemFree: 22644900 kB' 'MemUsed: 10232044 kB' 'SwapCached: 0 kB' 'Active: 6829480 kB' 'Inactive: 347252 kB' 'Active(anon): 6406140 kB' 'Inactive(anon): 0 kB' 'Active(file): 423340 kB' 'Inactive(file): 347252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6992584 kB' 'Mapped: 74768 kB' 'AnonPages: 187276 kB' 'Shmem: 6221992 kB' 'KernelStack: 7576 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 174908 kB' 'Slab: 369308 kB' 'SReclaimable: 174908 kB' 'SUnreclaim: 194400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 
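# get_nodes (hugepages.sh@27-33, traced above) discovers the NUMA topology and
# records each node's hugepage count; this box reports node0=1024, node1=0,
# no_nodes=2. A minimal sketch that gathers the same numbers straight from
# sysfs -- the 2048kB pool path is the x86 default, and the real helper may
# obtain the values differently:
shopt -s extglob                        # enables the +([0-9]) glob below
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # /sys/devices/system/node/node0 -> array key 0, and so on
    nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"        # 2 on this runner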
00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # continue 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.469 15:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.469 15:59:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.469 15:59:29 -- setup/common.sh@33 -- # echo 0 00:03:28.469 15:59:29 -- setup/common.sh@33 -- # return 0 00:03:28.469 15:59:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.469 15:59:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.469 15:59:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.469 15:59:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.469 15:59:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:28.469 node0=1024 expecting 1024 00:03:28.469 15:59:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:28.469 00:03:28.469 real 0m2.789s 00:03:28.469 user 0m1.111s 00:03:28.469 sys 0m1.602s 00:03:28.469 15:59:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.469 15:59:29 -- common/autotest_common.sh@10 -- # set +x 00:03:28.469 ************************************ 00:03:28.469 END TEST no_shrink_alloc 00:03:28.469 ************************************ 00:03:28.469 15:59:29 -- setup/hugepages.sh@217 -- # clear_hp 00:03:28.469 15:59:29 -- setup/hugepages.sh@37 -- # local node hp 00:03:28.469 15:59:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:28.469 15:59:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.469 15:59:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.469 
15:59:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.469 15:59:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.728 15:59:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:28.729 15:59:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.729 15:59:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.729 15:59:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.729 15:59:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.729 15:59:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:28.729 15:59:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:28.729 00:03:28.729 real 0m11.606s 00:03:28.729 user 0m4.363s 00:03:28.729 sys 0m6.009s 00:03:28.729 15:59:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.729 15:59:29 -- common/autotest_common.sh@10 -- # set +x 00:03:28.729 ************************************ 00:03:28.729 END TEST hugepages 00:03:28.729 ************************************ 00:03:28.729 15:59:29 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:28.729 15:59:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.729 15:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.729 15:59:29 -- common/autotest_common.sh@10 -- # set +x 00:03:28.729 ************************************ 00:03:28.729 START TEST driver 00:03:28.729 ************************************ 00:03:28.729 15:59:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:28.729 * Looking for test storage... 
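# Before the driver suite's output resumes below: the hugepages suite closed
# just above with clear_hp (hugepages.sh@37-45), which returns every per-node
# hugepage pool to zero. A minimal sketch of that cleanup, assuming the
# standard sysfs layout (writing nr_hugepages requires root):
shopt -s extglob
for node in /sys/devices/system/node/node+([0-9]); do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"     # release this node's pool
    done
done
export CLEAR_HUGE=yes                   # exported by clear_hp, as traced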
00:03:28.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:28.729 15:59:29 -- setup/driver.sh@68 -- # setup reset 00:03:28.729 15:59:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.729 15:59:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.257 15:59:32 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:31.257 15:59:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:31.257 15:59:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:31.257 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:03:31.257 ************************************ 00:03:31.257 START TEST guess_driver 00:03:31.257 ************************************ 00:03:31.257 15:59:32 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:31.257 15:59:32 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:31.257 15:59:32 -- setup/driver.sh@47 -- # local fail=0 00:03:31.257 15:59:32 -- setup/driver.sh@49 -- # pick_driver 00:03:31.257 15:59:32 -- setup/driver.sh@36 -- # vfio 00:03:31.257 15:59:32 -- setup/driver.sh@21 -- # local iommu_groups 00:03:31.257 15:59:32 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:31.257 15:59:32 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:31.257 15:59:32 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:31.257 15:59:32 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:31.257 15:59:32 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:31.257 15:59:32 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:31.257 15:59:32 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:31.257 15:59:32 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:31.257 15:59:32 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:31.257 15:59:32 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:31.257 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:31.257 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:31.257 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:31.257 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:31.257 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:31.257 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:31.257 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:31.257 15:59:32 -- setup/driver.sh@30 -- # return 0 00:03:31.257 15:59:32 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:31.257 15:59:32 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:31.257 15:59:32 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:31.257 15:59:32 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:31.257 Looking for driver=vfio-pci 00:03:31.257 15:59:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.257 15:59:32 -- setup/driver.sh@45 -- # setup output config 00:03:31.257 15:59:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.257 15:59:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.191 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.191 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.191 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.449 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.449 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.449 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.449 15:59:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.449 15:59:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.449 15:59:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.384 15:59:34 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:33.384 15:59:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.384 15:59:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.384 15:59:34 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:33.384 15:59:34 -- setup/driver.sh@65 -- # setup reset 00:03:33.384 15:59:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.384 15:59:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.915 00:03:35.915 real 0m4.616s 00:03:35.915 user 0m1.039s 00:03:35.915 sys 0m1.732s 00:03:35.915 15:59:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.915 15:59:36 -- common/autotest_common.sh@10 -- # set +x 00:03:35.915 ************************************ 00:03:35.915 END TEST guess_driver 00:03:35.915 ************************************ 00:03:35.915 00:03:35.915 real 0m7.050s 00:03:35.915 user 0m1.569s 00:03:35.915 sys 0m2.776s 00:03:35.915 15:59:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.915 15:59:36 -- common/autotest_common.sh@10 -- # set +x 00:03:35.915 ************************************ 00:03:35.915 END TEST driver 00:03:35.915 ************************************ 00:03:35.915 15:59:36 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:35.915 15:59:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.915 15:59:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.915 15:59:36 -- common/autotest_common.sh@10 -- # set +x 00:03:35.915 ************************************ 00:03:35.915 START TEST devices 00:03:35.915 ************************************ 00:03:35.915 15:59:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:35.915 * Looking for test storage... 
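# The driver suite that just finished picked vfio-pci (driver.sh@36-49, traced
# earlier): IOMMU groups exist under /sys/kernel/iommu_groups (141 on this
# runner) and modprobe resolves vfio_pci to real .ko files. A minimal sketch
# of that decision -- the uio_pci_generic fallback is an assumption for
# illustration, not necessarily the script's actual fallback chain:
pick_driver_sketch() {
    shopt -s nullglob                   # empty dir -> empty array, not a literal glob
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci                   # IOMMU present and module resolvable
    else
        echo uio_pci_generic            # assumed no-IOMMU fallback
    fi
}
driver=$(pick_driver_sketch)            # -> vfio-pci on this runner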
00:03:35.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:35.915 15:59:37 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:35.915 15:59:37 -- setup/devices.sh@192 -- # setup reset 00:03:35.915 15:59:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.915 15:59:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.288 15:59:38 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:37.288 15:59:38 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:37.288 15:59:38 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:37.288 15:59:38 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:37.288 15:59:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:37.288 15:59:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:37.288 15:59:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:37.288 15:59:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:37.288 15:59:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:37.288 15:59:38 -- setup/devices.sh@196 -- # blocks=() 00:03:37.288 15:59:38 -- setup/devices.sh@196 -- # declare -a blocks 00:03:37.288 15:59:38 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:37.288 15:59:38 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:37.288 15:59:38 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:37.288 15:59:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:37.288 15:59:38 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:37.288 15:59:38 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:37.288 15:59:38 -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:03:37.288 15:59:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:03:37.288 15:59:38 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:37.288 15:59:38 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:37.288 15:59:38 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:37.546 No valid GPT data, bailing 00:03:37.547 15:59:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:37.547 15:59:38 -- scripts/common.sh@391 -- # pt= 00:03:37.547 15:59:38 -- scripts/common.sh@392 -- # return 1 00:03:37.547 15:59:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:37.547 15:59:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:37.547 15:59:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:37.547 15:59:38 -- setup/common.sh@80 -- # echo 1000204886016 00:03:37.547 15:59:38 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:37.547 15:59:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:37.547 15:59:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:03:37.547 15:59:38 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:37.547 15:59:38 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:37.547 15:59:38 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:37.547 15:59:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.547 15:59:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.547 15:59:38 -- common/autotest_common.sh@10 -- # set +x 00:03:37.547 ************************************ 00:03:37.547 START TEST nvme_mount 00:03:37.547 ************************************ 00:03:37.547 15:59:38 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:37.547 15:59:38 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:37.547 15:59:38 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:37.547 15:59:38 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.547 15:59:38 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:37.547 15:59:38 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:37.547 15:59:38 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:37.547 15:59:38 -- setup/common.sh@40 -- # local part_no=1 00:03:37.547 15:59:38 -- setup/common.sh@41 -- # local size=1073741824 00:03:37.547 15:59:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:37.547 15:59:38 -- setup/common.sh@44 -- # parts=() 00:03:37.547 15:59:38 -- setup/common.sh@44 -- # local parts 00:03:37.547 15:59:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:37.547 15:59:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.547 15:59:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:37.547 15:59:38 -- setup/common.sh@46 -- # (( part++ )) 00:03:37.547 15:59:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.547 15:59:38 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:37.547 15:59:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:37.547 15:59:38 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:38.482 Creating new GPT entries in memory. 00:03:38.482 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:38.482 other utilities. 00:03:38.482 15:59:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:38.482 15:59:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.482 15:59:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:38.482 15:59:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:38.482 15:59:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:39.493 Creating new GPT entries in memory. 00:03:39.493 The operation has completed successfully. 
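# partition_drive (traced above) turned the 1 GiB request into 512-byte
# sectors and laid down a single GPT partition with sgdisk; every number
# below is the one visible in the trace. A sketch assuming a disposable
# disk -- the device path is illustrative, and sgdisk will destroy whatever
# is on it:
disk=/dev/nvme0n1                        # illustrative; must be a scratch disk
size=1073741824                          # 1 GiB in bytes
(( size /= 512 ))                        # -> 2097152 sectors
part_start=2048                          # first usable sector after GPT headers
(( part_end = part_start + size - 1 ))   # -> 2099199
sgdisk "$disk" --zap-all                 # wipe old GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:${part_start}:${part_end}   # partition 1, as traced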
00:03:39.493 15:59:40 -- setup/common.sh@57 -- # (( part++ )) 00:03:39.493 15:59:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.493 15:59:40 -- setup/common.sh@62 -- # wait 3271357 00:03:39.493 15:59:40 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.493 15:59:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:39.493 15:59:40 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.493 15:59:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:39.493 15:59:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:39.752 15:59:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.752 15:59:40 -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.752 15:59:40 -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:39.752 15:59:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:39.752 15:59:40 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.752 15:59:40 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.752 15:59:40 -- setup/devices.sh@53 -- # local found=0 00:03:39.752 15:59:40 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.752 15:59:40 -- setup/devices.sh@56 -- # : 00:03:39.752 15:59:40 -- setup/devices.sh@59 -- # local pci status 00:03:39.752 15:59:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.752 15:59:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:39.752 15:59:40 -- setup/devices.sh@47 -- # setup output config 00:03:39.752 15:59:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.752 15:59:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:40.686 15:59:41 -- setup/devices.sh@63 -- # found=1 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.686 15:59:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:40.686 15:59:41 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:40.686 15:59:41 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.686 15:59:41 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:40.686 15:59:41 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.686 15:59:41 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:40.686 15:59:41 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.686 15:59:41 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.686 15:59:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:40.686 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:40.686 15:59:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.686 15:59:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:40.988 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:40.988 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:40.988 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:40.988 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:40.989 15:59:42 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:40.989 15:59:42 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:40.989 15:59:42 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.989 15:59:42 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:40.989 15:59:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:40.989 15:59:42 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.989 15:59:42 -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.989 15:59:42 -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:40.989 15:59:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:40.989 15:59:42 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.989 15:59:42 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.989 15:59:42 -- setup/devices.sh@53 -- # local found=0 00:03:40.989 15:59:42 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:40.989 15:59:42 -- setup/devices.sh@56 -- # : 00:03:40.989 15:59:42 -- setup/devices.sh@59 -- # local pci status 00:03:40.989 15:59:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.989 15:59:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:40.989 15:59:42 -- setup/devices.sh@47 -- # setup output config 00:03:40.989 15:59:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.989 15:59:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:42.360 15:59:43 -- setup/devices.sh@63 -- # found=1 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.360 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.360 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.361 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.361 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.361 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.361 15:59:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.361 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.361 15:59:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.361 15:59:43 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:42.361 15:59:43 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.361 15:59:43 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.361 15:59:43 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.361 15:59:43 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.361 15:59:43 -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:03:42.361 15:59:43 -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:42.361 15:59:43 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:42.361 15:59:43 -- setup/devices.sh@50 -- # local mount_point= 00:03:42.361 15:59:43 -- setup/devices.sh@51 -- # local test_file= 00:03:42.361 15:59:43 -- setup/devices.sh@53 -- # local found=0 00:03:42.361 15:59:43 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:42.361 15:59:43 -- setup/devices.sh@59 -- # local pci status 00:03:42.361 15:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.361 15:59:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:42.361 15:59:43 -- setup/devices.sh@47 -- # setup output config 00:03:42.361 15:59:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.361 15:59:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.296 15:59:44 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.296 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.296 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.296 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.296 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.296 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.296 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.296 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.296 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.296 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.296 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.296 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.296 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.296 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.296 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.296 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.554 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.554 15:59:44 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:43.554 15:59:44 -- setup/devices.sh@63 -- # found=1 00:03:43.554 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.554 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.554 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.554 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.554 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.554 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.554 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.554 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.554 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.554 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.554 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.554 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.555 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.555 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.555 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.555 15:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.555 15:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.555 15:59:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.555 15:59:44 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:43.555 15:59:44 -- setup/devices.sh@68 -- # return 0 00:03:43.555 15:59:44 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:43.555 15:59:44 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.555 15:59:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:43.555 15:59:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:43.555 15:59:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:43.555 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:43.555 00:03:43.555 real 0m6.070s 00:03:43.555 user 0m1.429s 00:03:43.555 sys 0m2.203s 00:03:43.555 15:59:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:43.555 15:59:44 -- common/autotest_common.sh@10 -- # set +x 00:03:43.555 ************************************ 00:03:43.555 END TEST nvme_mount 00:03:43.555 ************************************ 00:03:43.555 15:59:44 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:43.555 15:59:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.555 15:59:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.555 15:59:44 -- common/autotest_common.sh@10 -- # set +x 00:03:43.813 ************************************ 00:03:43.813 START TEST dm_mount 00:03:43.813 ************************************ 00:03:43.813 15:59:44 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:43.813 15:59:44 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:43.813 15:59:44 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:43.813 15:59:44 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:43.813 15:59:44 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:43.813 15:59:44 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:43.813 15:59:44 -- setup/common.sh@40 -- # local part_no=2 00:03:43.813 15:59:44 -- setup/common.sh@41 -- # local size=1073741824 00:03:43.813 15:59:44 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:43.813 15:59:44 -- setup/common.sh@44 -- # parts=() 00:03:43.813 15:59:44 -- setup/common.sh@44 -- # local parts 00:03:43.813 15:59:44 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:43.813 15:59:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.813 15:59:44 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:43.813 15:59:44 -- setup/common.sh@46 -- # (( part++ )) 00:03:43.813 15:59:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.813 15:59:44 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:43.813 15:59:44 -- setup/common.sh@46 -- # (( part++ )) 00:03:43.813 15:59:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.813 15:59:44 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:43.813 15:59:44 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:43.813 15:59:44 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:44.749 Creating new GPT entries in memory. 00:03:44.749 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:44.749 other utilities. 00:03:44.749 15:59:45 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:44.749 15:59:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:44.749 15:59:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:44.749 15:59:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:44.749 15:59:45 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:45.684 Creating new GPT entries in memory. 00:03:45.684 The operation has completed successfully. 
00:03:45.684 15:59:46 -- setup/common.sh@57 -- # (( part++ )) 00:03:45.684 15:59:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.684 15:59:46 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:45.684 15:59:46 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:45.684 15:59:46 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:47.059 The operation has completed successfully. 00:03:47.059 15:59:47 -- setup/common.sh@57 -- # (( part++ )) 00:03:47.059 15:59:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.059 15:59:47 -- setup/common.sh@62 -- # wait 3273742 00:03:47.059 15:59:47 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:47.059 15:59:47 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:47.059 15:59:47 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:47.059 15:59:47 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:47.059 15:59:47 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:47.059 15:59:47 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:47.059 15:59:47 -- setup/devices.sh@161 -- # break 00:03:47.059 15:59:47 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:47.059 15:59:47 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:47.059 15:59:47 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:47.059 15:59:47 -- setup/devices.sh@166 -- # dm=dm-0 00:03:47.059 15:59:47 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:47.059 15:59:47 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:47.059 15:59:47 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:47.059 15:59:47 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:47.059 15:59:47 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:47.059 15:59:47 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:47.059 15:59:47 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:47.059 15:59:48 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:47.059 15:59:48 -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:47.059 15:59:48 -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:47.059 15:59:48 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:47.059 15:59:48 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:47.059 15:59:48 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:47.059 15:59:48 -- setup/devices.sh@53 -- # local found=0 00:03:47.059 15:59:48 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:47.059 15:59:48 -- setup/devices.sh@56 -- # : 00:03:47.059 15:59:48 -- 
setup/devices.sh@59 -- # local pci status 00:03:47.059 15:59:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.059 15:59:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:47.059 15:59:48 -- setup/devices.sh@47 -- # setup output config 00:03:47.059 15:59:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.059 15:59:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:47.994 15:59:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:47.994 15:59:49 -- setup/devices.sh@63 -- # found=1 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:47.994 15:59:49 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:47.994 15:59:49 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:47.994 15:59:49 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:47.994 15:59:49 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:47.994 15:59:49 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:47.994 15:59:49 -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:47.994 15:59:49 -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:47.994 15:59:49 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:47.994 15:59:49 -- setup/devices.sh@50 -- # local mount_point= 00:03:47.994 15:59:49 -- setup/devices.sh@51 -- # local test_file= 00:03:47.994 15:59:49 -- setup/devices.sh@53 -- # local found=0 00:03:47.994 15:59:49 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:47.994 15:59:49 -- setup/devices.sh@59 -- # local pci status 00:03:47.994 15:59:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.994 15:59:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:47.994 15:59:49 -- setup/devices.sh@47 -- # setup output config 00:03:47.994 15:59:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.994 15:59:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:48.928 15:59:50 -- setup/devices.sh@63 -- # found=1 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.928 15:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.928 15:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.186 15:59:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.186 15:59:50 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:49.186 15:59:50 -- setup/devices.sh@68 -- # return 0 00:03:49.186 15:59:50 -- setup/devices.sh@187 -- # cleanup_dm 00:03:49.186 15:59:50 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.186 15:59:50 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:49.186 15:59:50 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:49.186 15:59:50 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.186 15:59:50 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:49.186 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:49.186 15:59:50 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:49.186 15:59:50 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:49.186 00:03:49.186 real 0m5.479s 00:03:49.186 user 0m0.879s 00:03:49.186 sys 0m1.458s 00:03:49.186 15:59:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:49.186 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.186 ************************************ 00:03:49.186 END TEST dm_mount 00:03:49.186 ************************************ 00:03:49.186 15:59:50 -- setup/devices.sh@1 -- # cleanup 00:03:49.186 15:59:50 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:49.186 15:59:50 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.186 15:59:50 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.186 15:59:50 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:49.186 15:59:50 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.186 15:59:50 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:49.445 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 
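The dm_mount test above (its teardown output continues just below) follows a fixed recipe: carve two 1 GiB GPT partitions, build a device-mapper target named nvme_dm_test over them, format and mount it, verify the holders under /sys/class/block, then tear everything down. A condensed standalone sketch of that sequence, assuming /dev/nvme0n1 is an expendable scratch disk, with udevadm settle standing in for the repo's sync_dev_uevents.sh helper and an illustrative mount point:

    disk=/dev/nvme0n1                                  # assumption: safe to wipe
    sgdisk "$disk" --zap-all                           # destroy old GPT/MBR metadata
    # two 1 GiB partitions (1073741824 / 512 = 2097152 sectors each), as in the trace
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199
    flock "$disk" sgdisk "$disk" --new=2:2099200:4196351
    udevadm settle                                     # wait for nvme0n1p1/p2 to appear
    dmsetup create nvme_dm_test                        # dm table arrives on stdin in the real test
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test /mnt/dm_test        # illustrative mount point
    # teardown, mirroring cleanup_dm + cleanup_nvme above
    umount /mnt/dm_test
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2 "$disk"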
00:03:49.445 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:49.445 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:49.445 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:49.445 15:59:50 -- setup/devices.sh@12 -- # cleanup_dm 00:03:49.445 15:59:50 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.445 15:59:50 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:49.445 15:59:50 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.445 15:59:50 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:49.445 15:59:50 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.445 15:59:50 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:49.445 00:03:49.445 real 0m13.629s 00:03:49.445 user 0m3.025s 00:03:49.445 sys 0m4.764s 00:03:49.445 15:59:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:49.445 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.445 ************************************ 00:03:49.445 END TEST devices 00:03:49.445 ************************************ 00:03:49.445 00:03:49.445 real 0m42.855s 00:03:49.445 user 0m12.315s 00:03:49.445 sys 0m18.938s 00:03:49.445 15:59:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:49.445 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.445 ************************************ 00:03:49.445 END TEST setup.sh 00:03:49.445 ************************************ 00:03:49.445 15:59:50 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:50.817 Hugepages 00:03:50.817 node hugesize free / total 00:03:50.817 node0 1048576kB 0 / 0 00:03:50.817 node0 2048kB 2048 / 2048 00:03:50.817 node1 1048576kB 0 / 0 00:03:50.817 node1 2048kB 0 / 0 00:03:50.817 00:03:50.817 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.817 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:50.817 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:50.817 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:50.817 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:50.817 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:50.817 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:50.817 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:50.817 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:50.817 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:50.817 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:50.817 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:50.817 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:50.817 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:50.817 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:50.817 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:50.817 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:50.817 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:50.817 15:59:51 -- spdk/autotest.sh@130 -- # uname -s 00:03:50.817 15:59:51 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:50.817 15:59:51 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:50.817 15:59:51 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.750 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:51.750 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:51.750 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:51.750 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:51.750 0000:00:04.3 (8086 0e23): ioatdma 
-> vfio-pci 00:03:51.750 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:51.750 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:51.750 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:51.750 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:51.750 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:51.750 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:51.750 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:51.750 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:52.008 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:52.008 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:52.008 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:52.944 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:52.944 15:59:54 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:53.880 15:59:55 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:53.880 15:59:55 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:53.880 15:59:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:53.880 15:59:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:53.880 15:59:55 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:53.880 15:59:55 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:53.880 15:59:55 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.880 15:59:55 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:53.880 15:59:55 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:54.137 15:59:55 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:54.137 15:59:55 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:0b:00.0 00:03:54.137 15:59:55 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.070 Waiting for block devices as requested 00:03:55.070 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:55.328 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:55.328 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:55.328 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:55.328 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:55.587 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:55.587 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:55.587 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:55.587 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:03:55.845 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:55.845 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:55.845 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:55.845 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:56.103 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:56.103 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:56.103 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:56.103 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:56.361 15:59:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:56.361 15:59:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:56.361 15:59:57 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:03:56.361 15:59:57 -- common/autotest_common.sh@1488 -- # grep 0000:0b:00.0/nvme/nvme 00:03:56.361 15:59:57 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:56.361 15:59:57 -- common/autotest_common.sh@1489 -- # [[ -z 
/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:56.361 15:59:57 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:56.361 15:59:57 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:56.361 15:59:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:56.361 15:59:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:56.361 15:59:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:56.361 15:59:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:56.361 15:59:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:56.361 15:59:57 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:56.361 15:59:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:56.361 15:59:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:56.361 15:59:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:56.361 15:59:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:56.361 15:59:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:56.361 15:59:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:56.361 15:59:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:56.361 15:59:57 -- common/autotest_common.sh@1543 -- # continue 00:03:56.361 15:59:57 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:56.361 15:59:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:56.361 15:59:57 -- common/autotest_common.sh@10 -- # set +x 00:03:56.361 15:59:57 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:56.361 15:59:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:56.361 15:59:57 -- common/autotest_common.sh@10 -- # set +x 00:03:56.361 15:59:57 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.736 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:57.736 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:57.736 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:57.736 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:57.736 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:57.736 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:57.736 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:57.736 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:57.736 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:57.736 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:57.736 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:57.736 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:57.736 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:57.736 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:57.736 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:57.736 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:58.301 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.559 15:59:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:58.559 15:59:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:58.559 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:03:58.559 15:59:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:58.559 15:59:59 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:58.559 15:59:59 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:58.559 15:59:59 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:58.559 15:59:59 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:58.559 15:59:59 -- common/autotest_common.sh@1565 -- # 
get_nvme_bdfs 00:03:58.559 15:59:59 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:58.559 15:59:59 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:58.559 15:59:59 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:58.559 15:59:59 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:58.559 15:59:59 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:58.559 15:59:59 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:58.559 15:59:59 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:0b:00.0 00:03:58.559 15:59:59 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:58.559 15:59:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:58.559 15:59:59 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:58.559 15:59:59 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:58.559 15:59:59 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:58.559 15:59:59 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:0b:00.0 00:03:58.559 15:59:59 -- common/autotest_common.sh@1578 -- # [[ -z 0000:0b:00.0 ]] 00:03:58.559 15:59:59 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=3278900 00:03:58.559 15:59:59 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.559 15:59:59 -- common/autotest_common.sh@1584 -- # waitforlisten 3278900 00:03:58.559 15:59:59 -- common/autotest_common.sh@817 -- # '[' -z 3278900 ']' 00:03:58.559 15:59:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.559 15:59:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:58.559 15:59:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.559 15:59:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:58.559 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:03:58.817 [2024-04-24 15:59:59.850312] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
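Two related checks are traced around this point. nvme_namespace_revert, just above, parses the controller's OACS word (bit 3, Namespace Management) straight out of nvme id-ctrl with grep and cut; opal_revert_cleanup, starting here, re-enumerates the NVMe PCI addresses via scripts/gen_nvme.sh piped to jq, starts spdk_tgt, and drives the revert over JSON-RPC, as the records below show. A condensed sketch of both, with rootdir set to this workspace's path:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # OACS parse from nvme_namespace_revert (yields ' 0xf' on this controller)
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    oacs_ns_manage=$(( oacs & 0x8 ))                   # bit 3 = namespace management
    # PCI address discovery, as traced above
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # OPAL revert over JSON-RPC, as traced below
    "$rootdir/build/bin/spdk_tgt" &                    # harness then waits on /var/tmp/spdk.sock
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a "${bdfs[0]}"
    "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true

In this run the revert itself fails with JSON-RPC error -32603 (Revert TPer failure: 18), which the test tolerates via the trailing true, matching the || true in the sketch.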
00:03:58.817 [2024-04-24 15:59:59.850410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278900 ] 00:03:58.817 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.817 [2024-04-24 15:59:59.907524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.817 [2024-04-24 16:00:00.014830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.074 16:00:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:59.074 16:00:00 -- common/autotest_common.sh@850 -- # return 0 00:03:59.074 16:00:00 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:59.074 16:00:00 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:59.074 16:00:00 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:04:02.348 nvme0n1 00:04:02.348 16:00:03 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:02.348 [2024-04-24 16:00:03.591288] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:02.348 [2024-04-24 16:00:03.591329] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:02.348 request: 00:04:02.348 { 00:04:02.348 "nvme_ctrlr_name": "nvme0", 00:04:02.348 "password": "test", 00:04:02.348 "method": "bdev_nvme_opal_revert", 00:04:02.348 "req_id": 1 00:04:02.348 } 00:04:02.348 Got JSON-RPC error response 00:04:02.348 response: 00:04:02.348 { 00:04:02.348 "code": -32603, 00:04:02.348 "message": "Internal error" 00:04:02.348 } 00:04:02.348 16:00:03 -- common/autotest_common.sh@1590 -- # true 00:04:02.348 16:00:03 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:02.348 16:00:03 -- common/autotest_common.sh@1594 -- # killprocess 3278900 00:04:02.348 16:00:03 -- common/autotest_common.sh@936 -- # '[' -z 3278900 ']' 00:04:02.348 16:00:03 -- common/autotest_common.sh@940 -- # kill -0 3278900 00:04:02.348 16:00:03 -- common/autotest_common.sh@941 -- # uname 00:04:02.348 16:00:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:02.348 16:00:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3278900 00:04:02.348 16:00:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:02.348 16:00:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:02.348 16:00:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3278900' 00:04:02.348 killing process with pid 3278900 00:04:02.348 16:00:03 -- common/autotest_common.sh@955 -- # kill 3278900 00:04:02.348 16:00:03 -- common/autotest_common.sh@960 -- # wait 3278900 00:04:04.243 16:00:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:04.243 16:00:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:04.243 16:00:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.243 16:00:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.243 16:00:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:04.243 16:00:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:04.243 16:00:05 -- common/autotest_common.sh@10 -- # set +x 00:04:04.243 16:00:05 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:04.243 16:00:05 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.243 16:00:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.243 16:00:05 -- common/autotest_common.sh@10 -- # set +x 00:04:04.243 ************************************ 00:04:04.243 START TEST env 00:04:04.243 ************************************ 00:04:04.243 16:00:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:04.243 * Looking for test storage... 00:04:04.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:04.244 16:00:05 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:04.244 16:00:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.244 16:00:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.244 16:00:05 -- common/autotest_common.sh@10 -- # set +x 00:04:04.501 ************************************ 00:04:04.501 START TEST env_memory 00:04:04.501 ************************************ 00:04:04.501 16:00:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:04.501 00:04:04.501 00:04:04.501 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.501 http://cunit.sourceforge.net/ 00:04:04.501 00:04:04.501 00:04:04.501 Suite: memory 00:04:04.501 Test: alloc and free memory map ...[2024-04-24 16:00:05.633857] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.501 passed 00:04:04.501 Test: mem map translation ...[2024-04-24 16:00:05.654773] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.501 [2024-04-24 16:00:05.654795] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.501 [2024-04-24 16:00:05.654856] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.501 [2024-04-24 16:00:05.654869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.501 passed 00:04:04.501 Test: mem map registration ...[2024-04-24 16:00:05.696966] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:04.501 [2024-04-24 16:00:05.696985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:04.501 passed 00:04:04.501 Test: mem map adjacent registrations ...passed 00:04:04.501 00:04:04.501 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.501 suites 1 1 n/a 0 0 00:04:04.501 tests 4 4 4 0 0 00:04:04.501 asserts 152 152 152 0 n/a 00:04:04.501 00:04:04.501 Elapsed time = 0.145 seconds 00:04:04.501 00:04:04.501 real 0m0.153s 00:04:04.501 user 0m0.146s 00:04:04.501 sys 0m0.006s 00:04:04.501 16:00:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.501 16:00:05 -- common/autotest_common.sh@10 -- # set +x 
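The START/END banner blocks and the real/user/sys triplets framing memory_ut here, and every other test in this log, come from the harness's run_test wrapper in autotest_common.sh. A simplified sketch of that framing; the real helper also manages xtrace toggling and failure accounting:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # source of the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g.: run_test env_memory "$rootdir/test/env/memory/memory_ut"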
00:04:04.501 ************************************ 00:04:04.501 END TEST env_memory 00:04:04.501 ************************************ 00:04:04.501 16:00:05 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:04.501 16:00:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.501 16:00:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.501 16:00:05 -- common/autotest_common.sh@10 -- # set +x 00:04:04.759 ************************************ 00:04:04.759 START TEST env_vtophys 00:04:04.759 ************************************ 00:04:04.759 16:00:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:04.759 EAL: lib.eal log level changed from notice to debug 00:04:04.759 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.759 EAL: Detected lcore 1 as core 1 on socket 0 00:04:04.759 EAL: Detected lcore 2 as core 2 on socket 0 00:04:04.759 EAL: Detected lcore 3 as core 3 on socket 0 00:04:04.759 EAL: Detected lcore 4 as core 4 on socket 0 00:04:04.759 EAL: Detected lcore 5 as core 5 on socket 0 00:04:04.759 EAL: Detected lcore 6 as core 8 on socket 0 00:04:04.759 EAL: Detected lcore 7 as core 9 on socket 0 00:04:04.759 EAL: Detected lcore 8 as core 10 on socket 0 00:04:04.759 EAL: Detected lcore 9 as core 11 on socket 0 00:04:04.759 EAL: Detected lcore 10 as core 12 on socket 0 00:04:04.759 EAL: Detected lcore 11 as core 13 on socket 0 00:04:04.759 EAL: Detected lcore 12 as core 0 on socket 1 00:04:04.759 EAL: Detected lcore 13 as core 1 on socket 1 00:04:04.759 EAL: Detected lcore 14 as core 2 on socket 1 00:04:04.759 EAL: Detected lcore 15 as core 3 on socket 1 00:04:04.760 EAL: Detected lcore 16 as core 4 on socket 1 00:04:04.760 EAL: Detected lcore 17 as core 5 on socket 1 00:04:04.760 EAL: Detected lcore 18 as core 8 on socket 1 00:04:04.760 EAL: Detected lcore 19 as core 9 on socket 1 00:04:04.760 EAL: Detected lcore 20 as core 10 on socket 1 00:04:04.760 EAL: Detected lcore 21 as core 11 on socket 1 00:04:04.760 EAL: Detected lcore 22 as core 12 on socket 1 00:04:04.760 EAL: Detected lcore 23 as core 13 on socket 1 00:04:04.760 EAL: Detected lcore 24 as core 0 on socket 0 00:04:04.760 EAL: Detected lcore 25 as core 1 on socket 0 00:04:04.760 EAL: Detected lcore 26 as core 2 on socket 0 00:04:04.760 EAL: Detected lcore 27 as core 3 on socket 0 00:04:04.760 EAL: Detected lcore 28 as core 4 on socket 0 00:04:04.760 EAL: Detected lcore 29 as core 5 on socket 0 00:04:04.760 EAL: Detected lcore 30 as core 8 on socket 0 00:04:04.760 EAL: Detected lcore 31 as core 9 on socket 0 00:04:04.760 EAL: Detected lcore 32 as core 10 on socket 0 00:04:04.760 EAL: Detected lcore 33 as core 11 on socket 0 00:04:04.760 EAL: Detected lcore 34 as core 12 on socket 0 00:04:04.760 EAL: Detected lcore 35 as core 13 on socket 0 00:04:04.760 EAL: Detected lcore 36 as core 0 on socket 1 00:04:04.760 EAL: Detected lcore 37 as core 1 on socket 1 00:04:04.760 EAL: Detected lcore 38 as core 2 on socket 1 00:04:04.760 EAL: Detected lcore 39 as core 3 on socket 1 00:04:04.760 EAL: Detected lcore 40 as core 4 on socket 1 00:04:04.760 EAL: Detected lcore 41 as core 5 on socket 1 00:04:04.760 EAL: Detected lcore 42 as core 8 on socket 1 00:04:04.760 EAL: Detected lcore 43 as core 9 on socket 1 00:04:04.760 EAL: Detected lcore 44 as core 10 on socket 1 00:04:04.760 EAL: Detected lcore 45 as core 11 on socket 1 00:04:04.760 EAL: Detected lcore 46 as core 12 on 
socket 1 00:04:04.760 EAL: Detected lcore 47 as core 13 on socket 1 00:04:04.760 EAL: Maximum logical cores by configuration: 128 00:04:04.760 EAL: Detected CPU lcores: 48 00:04:04.760 EAL: Detected NUMA nodes: 2 00:04:04.760 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:04.760 EAL: Detected shared linkage of DPDK 00:04:04.760 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.760 EAL: Bus pci wants IOVA as 'DC' 00:04:04.760 EAL: Buses did not request a specific IOVA mode. 00:04:04.760 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:04.760 EAL: Selected IOVA mode 'VA' 00:04:04.760 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.760 EAL: Probing VFIO support... 00:04:04.760 EAL: IOMMU type 1 (Type 1) is supported 00:04:04.760 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:04.760 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:04.760 EAL: VFIO support initialized 00:04:04.760 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.760 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.760 EAL: Setting up physically contiguous memory... 00:04:04.760 EAL: Setting maximum number of open files to 524288 00:04:04.760 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.760 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:04.760 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.760 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.760 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.760 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.760 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.760 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.760 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.760 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.760 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.760 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.760 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.760 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.760 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.760 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.760 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.760 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.760 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.760 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.760 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.760 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.760 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.760 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.760 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.760 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.760 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.760 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:04.760 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.760 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:04.760 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.760 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.760 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:04.760 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 
00:04:04.760 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.760 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:04.760 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.760 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.760 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:04.760 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:04.760 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.760 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:04.760 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.760 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.760 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:04.760 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:04.760 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.760 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:04.760 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.760 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.760 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:04.760 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:04.760 EAL: Hugepages will be freed exactly as allocated. 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: TSC frequency is ~2700000 KHz 00:04:04.760 EAL: Main lcore 0 is ready (tid=7f81c9d86a00;cpuset=[0]) 00:04:04.760 EAL: Trying to obtain current memory policy. 00:04:04.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.760 EAL: Restoring previous memory policy: 0 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.760 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.760 00:04:04.760 00:04:04.760 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.760 http://cunit.sourceforge.net/ 00:04:04.760 00:04:04.760 00:04:04.760 Suite: components_suite 00:04:04.760 Test: vtophys_malloc_test ...passed 00:04:04.760 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.760 EAL: Restoring previous memory policy: 4 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.760 EAL: Trying to obtain current memory policy. 
00:04:04.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.760 EAL: Restoring previous memory policy: 4 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.760 EAL: Trying to obtain current memory policy. 00:04:04.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.760 EAL: Restoring previous memory policy: 4 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.760 EAL: Trying to obtain current memory policy. 00:04:04.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.760 EAL: Restoring previous memory policy: 4 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.760 EAL: Trying to obtain current memory policy. 00:04:04.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.760 EAL: Restoring previous memory policy: 4 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.760 EAL: No shared files mode enabled, IPC is disabled 00:04:04.760 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.760 EAL: request: mp_malloc_sync 00:04:04.761 EAL: No shared files mode enabled, IPC is disabled 00:04:04.761 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.761 EAL: Trying to obtain current memory policy. 00:04:04.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.761 EAL: Restoring previous memory policy: 4 00:04:04.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.761 EAL: request: mp_malloc_sync 00:04:04.761 EAL: No shared files mode enabled, IPC is disabled 00:04:04.761 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.761 EAL: request: mp_malloc_sync 00:04:04.761 EAL: No shared files mode enabled, IPC is disabled 00:04:04.761 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.761 EAL: Trying to obtain current memory policy. 
00:04:04.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.761 EAL: Restoring previous memory policy: 4 00:04:04.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.761 EAL: request: mp_malloc_sync 00:04:04.761 EAL: No shared files mode enabled, IPC is disabled 00:04:04.761 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.019 EAL: request: mp_malloc_sync 00:04:05.019 EAL: No shared files mode enabled, IPC is disabled 00:04:05.019 EAL: Heap on socket 0 was shrunk by 130MB 00:04:05.019 EAL: Trying to obtain current memory policy. 00:04:05.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.019 EAL: Restoring previous memory policy: 4 00:04:05.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.019 EAL: request: mp_malloc_sync 00:04:05.019 EAL: No shared files mode enabled, IPC is disabled 00:04:05.019 EAL: Heap on socket 0 was expanded by 258MB 00:04:05.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.019 EAL: request: mp_malloc_sync 00:04:05.019 EAL: No shared files mode enabled, IPC is disabled 00:04:05.019 EAL: Heap on socket 0 was shrunk by 258MB 00:04:05.019 EAL: Trying to obtain current memory policy. 00:04:05.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.276 EAL: Restoring previous memory policy: 4 00:04:05.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.276 EAL: request: mp_malloc_sync 00:04:05.276 EAL: No shared files mode enabled, IPC is disabled 00:04:05.276 EAL: Heap on socket 0 was expanded by 514MB 00:04:05.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.534 EAL: request: mp_malloc_sync 00:04:05.534 EAL: No shared files mode enabled, IPC is disabled 00:04:05.534 EAL: Heap on socket 0 was shrunk by 514MB 00:04:05.534 EAL: Trying to obtain current memory policy. 
00:04:05.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.791 EAL: Restoring previous memory policy: 4 00:04:05.791 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.791 EAL: request: mp_malloc_sync 00:04:05.791 EAL: No shared files mode enabled, IPC is disabled 00:04:05.791 EAL: Heap on socket 0 was expanded by 1026MB 00:04:06.049 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.308 EAL: request: mp_malloc_sync 00:04:06.308 EAL: No shared files mode enabled, IPC is disabled 00:04:06.308 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:06.308 passed 00:04:06.308 00:04:06.308 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.308 suites 1 1 n/a 0 0 00:04:06.308 tests 2 2 2 0 0 00:04:06.308 asserts 497 497 497 0 n/a 00:04:06.308 00:04:06.308 Elapsed time = 1.359 seconds 00:04:06.308 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.308 EAL: request: mp_malloc_sync 00:04:06.308 EAL: No shared files mode enabled, IPC is disabled 00:04:06.308 EAL: Heap on socket 0 was shrunk by 2MB 00:04:06.308 EAL: No shared files mode enabled, IPC is disabled 00:04:06.308 EAL: No shared files mode enabled, IPC is disabled 00:04:06.308 EAL: No shared files mode enabled, IPC is disabled 00:04:06.308 00:04:06.308 real 0m1.479s 00:04:06.308 user 0m0.841s 00:04:06.308 sys 0m0.599s 00:04:06.308 16:00:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:06.308 16:00:07 -- common/autotest_common.sh@10 -- # set +x 00:04:06.308 ************************************ 00:04:06.308 END TEST env_vtophys 00:04:06.308 ************************************ 00:04:06.308 16:00:07 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:06.308 16:00:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.308 16:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.308 16:00:07 -- common/autotest_common.sh@10 -- # set +x 00:04:06.308 ************************************ 00:04:06.308 START TEST env_pci 00:04:06.308 ************************************ 00:04:06.308 16:00:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:06.308 00:04:06.308 00:04:06.308 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.308 http://cunit.sourceforge.net/ 00:04:06.308 00:04:06.308 00:04:06.308 Suite: pci 00:04:06.308 Test: pci_hook ...[2024-04-24 16:00:07.473481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3279948 has claimed it 00:04:06.308 EAL: Cannot find device (10000:00:01.0) 00:04:06.308 EAL: Failed to attach device on primary process 00:04:06.308 passed 00:04:06.308 00:04:06.308 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.308 suites 1 1 n/a 0 0 00:04:06.308 tests 1 1 1 0 0 00:04:06.308 asserts 25 25 25 0 n/a 00:04:06.308 00:04:06.308 Elapsed time = 0.021 seconds 00:04:06.308 00:04:06.308 real 0m0.033s 00:04:06.308 user 0m0.009s 00:04:06.308 sys 0m0.024s 00:04:06.308 16:00:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:06.308 16:00:07 -- common/autotest_common.sh@10 -- # set +x 00:04:06.308 ************************************ 00:04:06.308 END TEST env_pci 00:04:06.308 ************************************ 00:04:06.308 16:00:07 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:06.308 16:00:07 -- env/env.sh@15 -- # uname 00:04:06.308 16:00:07 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:04:06.308 16:00:07 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:06.308 16:00:07 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.308 16:00:07 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:06.308 16:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.308 16:00:07 -- common/autotest_common.sh@10 -- # set +x 00:04:06.567 ************************************ 00:04:06.567 START TEST env_dpdk_post_init 00:04:06.567 ************************************ 00:04:06.567 16:00:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.567 EAL: Detected CPU lcores: 48 00:04:06.567 EAL: Detected NUMA nodes: 2 00:04:06.567 EAL: Detected shared linkage of DPDK 00:04:06.567 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.567 EAL: Selected IOVA mode 'VA' 00:04:06.567 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.567 EAL: VFIO support initialized 00:04:06.567 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.567 EAL: Using IOMMU type 1 (Type 1) 00:04:06.567 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:06.567 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:06.567 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:06.567 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:06.567 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:06.567 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:06.567 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:06.567 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:07.513 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:04:07.513 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:07.513 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:07.513 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:07.513 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:07.513 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:07.513 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:07.513 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:07.513 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:10.840 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:04:10.840 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:04:10.840 Starting DPDK initialization... 00:04:10.840 Starting SPDK post initialization... 00:04:10.840 SPDK NVMe probe 00:04:10.840 Attaching to 0000:0b:00.0 00:04:10.840 Attached to 0000:0b:00.0 00:04:10.840 Cleaning up... 
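The probe sequence above is env_dpdk_post_init bringing up EAL (48 lcores, 2 NUMA nodes, IOVA mode 'VA', IOMMU type 1), claiming the eight ioat channels on each socket, and attaching the single NVMe controller at 0000:0b:00.0. The invocation is spelled out in the run_test line above and can be repeated by hand as a sketch (sudo is an assumption, for VFIO device access):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000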
00:04:10.840 00:04:10.840 real 0m4.365s 00:04:10.840 user 0m3.215s 00:04:10.840 sys 0m0.206s 00:04:10.840 16:00:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.840 16:00:11 -- common/autotest_common.sh@10 -- # set +x 00:04:10.840 ************************************ 00:04:10.840 END TEST env_dpdk_post_init 00:04:10.840 ************************************ 00:04:10.840 16:00:12 -- env/env.sh@26 -- # uname 00:04:10.840 16:00:12 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:10.840 16:00:12 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.840 16:00:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.840 16:00:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.840 16:00:12 -- common/autotest_common.sh@10 -- # set +x 00:04:10.840 ************************************ 00:04:10.840 START TEST env_mem_callbacks 00:04:10.840 ************************************ 00:04:10.840 16:00:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:11.106 EAL: Detected CPU lcores: 48 00:04:11.106 EAL: Detected NUMA nodes: 2 00:04:11.106 EAL: Detected shared linkage of DPDK 00:04:11.106 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:11.106 EAL: Selected IOVA mode 'VA' 00:04:11.106 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.106 EAL: VFIO support initialized 00:04:11.106 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:11.106 00:04:11.106 00:04:11.106 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.106 http://cunit.sourceforge.net/ 00:04:11.106 00:04:11.106 00:04:11.106 Suite: memory 00:04:11.107 Test: test ... 
00:04:11.107 register 0x200000200000 2097152 00:04:11.107 malloc 3145728 00:04:11.107 register 0x200000400000 4194304 00:04:11.107 buf 0x200000500000 len 3145728 PASSED 00:04:11.107 malloc 64 00:04:11.107 buf 0x2000004fff40 len 64 PASSED 00:04:11.107 malloc 4194304 00:04:11.107 register 0x200000800000 6291456 00:04:11.107 buf 0x200000a00000 len 4194304 PASSED 00:04:11.107 free 0x200000500000 3145728 00:04:11.107 free 0x2000004fff40 64 00:04:11.107 unregister 0x200000400000 4194304 PASSED 00:04:11.107 free 0x200000a00000 4194304 00:04:11.107 unregister 0x200000800000 6291456 PASSED 00:04:11.107 malloc 8388608 00:04:11.107 register 0x200000400000 10485760 00:04:11.107 buf 0x200000600000 len 8388608 PASSED 00:04:11.107 free 0x200000600000 8388608 00:04:11.107 unregister 0x200000400000 10485760 PASSED 00:04:11.107 passed 00:04:11.107 00:04:11.107 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.107 suites 1 1 n/a 0 0 00:04:11.107 tests 1 1 1 0 0 00:04:11.107 asserts 15 15 15 0 n/a 00:04:11.107 00:04:11.107 Elapsed time = 0.005 seconds 00:04:11.107 00:04:11.107 real 0m0.051s 00:04:11.107 user 0m0.013s 00:04:11.107 sys 0m0.037s 00:04:11.107 16:00:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:11.107 16:00:12 -- common/autotest_common.sh@10 -- # set +x 00:04:11.107 ************************************ 00:04:11.107 END TEST env_mem_callbacks 00:04:11.107 ************************************ 00:04:11.107 00:04:11.107 real 0m6.729s 00:04:11.107 user 0m4.466s 00:04:11.107 sys 0m1.237s 00:04:11.107 16:00:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:11.107 16:00:12 -- common/autotest_common.sh@10 -- # set +x 00:04:11.107 ************************************ 00:04:11.107 END TEST env 00:04:11.107 ************************************ 00:04:11.107 16:00:12 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:11.107 16:00:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.107 16:00:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.107 16:00:12 -- common/autotest_common.sh@10 -- # set +x 00:04:11.107 ************************************ 00:04:11.107 START TEST rpc 00:04:11.107 ************************************ 00:04:11.107 16:00:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:11.107 * Looking for test storage... 00:04:11.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.107 16:00:12 -- rpc/rpc.sh@65 -- # spdk_pid=3281238 00:04:11.107 16:00:12 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:11.107 16:00:12 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.107 16:00:12 -- rpc/rpc.sh@67 -- # waitforlisten 3281238 00:04:11.107 16:00:12 -- common/autotest_common.sh@817 -- # '[' -z 3281238 ']' 00:04:11.107 16:00:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.107 16:00:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:11.107 16:00:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
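waitforlisten above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock. A hedged equivalent using plain rpc.py polling (rpc_get_methods is a cheap, always-available RPC; the 1-second timeout and 0.5-second interval are arbitrary choices here, not necessarily what autotest_common.sh uses):

    # Poll until the target's RPC socket answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done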
00:04:11.107 16:00:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:11.107 16:00:12 -- common/autotest_common.sh@10 -- # set +x 00:04:11.373 [2024-04-24 16:00:12.400992] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:04:11.373 [2024-04-24 16:00:12.401105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281238 ] 00:04:11.373 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.373 [2024-04-24 16:00:12.465413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.373 [2024-04-24 16:00:12.577023] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:11.373 [2024-04-24 16:00:12.577092] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3281238' to capture a snapshot of events at runtime. 00:04:11.373 [2024-04-24 16:00:12.577106] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:11.373 [2024-04-24 16:00:12.577118] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:11.373 [2024-04-24 16:00:12.577127] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3281238 for offline analysis/debug. 00:04:11.373 [2024-04-24 16:00:12.577167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.333 16:00:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:12.333 16:00:13 -- common/autotest_common.sh@850 -- # return 0 00:04:12.333 16:00:13 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.333 16:00:13 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.333 16:00:13 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:12.333 16:00:13 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:12.333 16:00:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.333 16:00:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.333 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.333 ************************************ 00:04:12.333 START TEST rpc_integrity 00:04:12.333 ************************************ 00:04:12.333 16:00:13 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:12.333 16:00:13 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.333 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.333 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.333 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.333 16:00:13 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.333 16:00:13 -- rpc/rpc.sh@13 -- # jq length 00:04:12.333 16:00:13 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.333 16:00:13 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.333 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 
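The rpc_integrity test starting above is a create/inspect/delete round trip over JSON-RPC. Against the target just launched, the same sequence by hand looks roughly like this (Malloc0 is the default name the malloc RPC assigns; the jq counts mirror the test's assertions):

    ./scripts/rpc.py bdev_malloc_create 8 512                # 8 MiB malloc bdev -> "Malloc0"
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length              # 2: Malloc0 + Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length              # back to 0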
00:04:12.333 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.333 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.333 16:00:13 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:12.333 16:00:13 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.333 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.333 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.333 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.333 16:00:13 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.333 { 00:04:12.333 "name": "Malloc0", 00:04:12.333 "aliases": [ 00:04:12.333 "2edb4d3e-6449-40f1-a0a8-ccbe9ce99386" 00:04:12.333 ], 00:04:12.333 "product_name": "Malloc disk", 00:04:12.333 "block_size": 512, 00:04:12.333 "num_blocks": 16384, 00:04:12.334 "uuid": "2edb4d3e-6449-40f1-a0a8-ccbe9ce99386", 00:04:12.334 "assigned_rate_limits": { 00:04:12.334 "rw_ios_per_sec": 0, 00:04:12.334 "rw_mbytes_per_sec": 0, 00:04:12.334 "r_mbytes_per_sec": 0, 00:04:12.334 "w_mbytes_per_sec": 0 00:04:12.334 }, 00:04:12.334 "claimed": false, 00:04:12.334 "zoned": false, 00:04:12.334 "supported_io_types": { 00:04:12.334 "read": true, 00:04:12.334 "write": true, 00:04:12.334 "unmap": true, 00:04:12.334 "write_zeroes": true, 00:04:12.334 "flush": true, 00:04:12.334 "reset": true, 00:04:12.334 "compare": false, 00:04:12.334 "compare_and_write": false, 00:04:12.334 "abort": true, 00:04:12.334 "nvme_admin": false, 00:04:12.334 "nvme_io": false 00:04:12.334 }, 00:04:12.334 "memory_domains": [ 00:04:12.334 { 00:04:12.334 "dma_device_id": "system", 00:04:12.334 "dma_device_type": 1 00:04:12.334 }, 00:04:12.334 { 00:04:12.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.334 "dma_device_type": 2 00:04:12.334 } 00:04:12.334 ], 00:04:12.334 "driver_specific": {} 00:04:12.334 } 00:04:12.334 ]' 00:04:12.334 16:00:13 -- rpc/rpc.sh@17 -- # jq length 00:04:12.334 16:00:13 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.334 16:00:13 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:12.334 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.334 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.334 [2024-04-24 16:00:13.519151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:12.334 [2024-04-24 16:00:13.519196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.334 [2024-04-24 16:00:13.519223] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1308e00 00:04:12.334 [2024-04-24 16:00:13.519239] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.334 [2024-04-24 16:00:13.520791] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.334 [2024-04-24 16:00:13.520817] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.334 Passthru0 00:04:12.334 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.334 16:00:13 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.334 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.334 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.334 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.334 16:00:13 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.334 { 00:04:12.334 "name": "Malloc0", 00:04:12.334 "aliases": [ 00:04:12.334 "2edb4d3e-6449-40f1-a0a8-ccbe9ce99386" 00:04:12.334 ], 00:04:12.334 "product_name": "Malloc disk", 00:04:12.334 "block_size": 512, 
00:04:12.334 "num_blocks": 16384, 00:04:12.334 "uuid": "2edb4d3e-6449-40f1-a0a8-ccbe9ce99386", 00:04:12.334 "assigned_rate_limits": { 00:04:12.334 "rw_ios_per_sec": 0, 00:04:12.334 "rw_mbytes_per_sec": 0, 00:04:12.334 "r_mbytes_per_sec": 0, 00:04:12.334 "w_mbytes_per_sec": 0 00:04:12.334 }, 00:04:12.334 "claimed": true, 00:04:12.334 "claim_type": "exclusive_write", 00:04:12.334 "zoned": false, 00:04:12.334 "supported_io_types": { 00:04:12.334 "read": true, 00:04:12.334 "write": true, 00:04:12.334 "unmap": true, 00:04:12.334 "write_zeroes": true, 00:04:12.334 "flush": true, 00:04:12.334 "reset": true, 00:04:12.334 "compare": false, 00:04:12.334 "compare_and_write": false, 00:04:12.334 "abort": true, 00:04:12.334 "nvme_admin": false, 00:04:12.334 "nvme_io": false 00:04:12.334 }, 00:04:12.334 "memory_domains": [ 00:04:12.334 { 00:04:12.334 "dma_device_id": "system", 00:04:12.334 "dma_device_type": 1 00:04:12.334 }, 00:04:12.334 { 00:04:12.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.334 "dma_device_type": 2 00:04:12.334 } 00:04:12.334 ], 00:04:12.334 "driver_specific": {} 00:04:12.334 }, 00:04:12.334 { 00:04:12.334 "name": "Passthru0", 00:04:12.334 "aliases": [ 00:04:12.334 "579b0ab0-a49d-5564-b9b0-86a240af2ccf" 00:04:12.334 ], 00:04:12.334 "product_name": "passthru", 00:04:12.334 "block_size": 512, 00:04:12.334 "num_blocks": 16384, 00:04:12.334 "uuid": "579b0ab0-a49d-5564-b9b0-86a240af2ccf", 00:04:12.334 "assigned_rate_limits": { 00:04:12.334 "rw_ios_per_sec": 0, 00:04:12.334 "rw_mbytes_per_sec": 0, 00:04:12.334 "r_mbytes_per_sec": 0, 00:04:12.334 "w_mbytes_per_sec": 0 00:04:12.334 }, 00:04:12.334 "claimed": false, 00:04:12.334 "zoned": false, 00:04:12.334 "supported_io_types": { 00:04:12.334 "read": true, 00:04:12.334 "write": true, 00:04:12.334 "unmap": true, 00:04:12.334 "write_zeroes": true, 00:04:12.334 "flush": true, 00:04:12.334 "reset": true, 00:04:12.334 "compare": false, 00:04:12.334 "compare_and_write": false, 00:04:12.334 "abort": true, 00:04:12.334 "nvme_admin": false, 00:04:12.334 "nvme_io": false 00:04:12.334 }, 00:04:12.334 "memory_domains": [ 00:04:12.334 { 00:04:12.334 "dma_device_id": "system", 00:04:12.334 "dma_device_type": 1 00:04:12.334 }, 00:04:12.334 { 00:04:12.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.334 "dma_device_type": 2 00:04:12.334 } 00:04:12.334 ], 00:04:12.334 "driver_specific": { 00:04:12.334 "passthru": { 00:04:12.334 "name": "Passthru0", 00:04:12.334 "base_bdev_name": "Malloc0" 00:04:12.334 } 00:04:12.334 } 00:04:12.334 } 00:04:12.334 ]' 00:04:12.334 16:00:13 -- rpc/rpc.sh@21 -- # jq length 00:04:12.334 16:00:13 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.334 16:00:13 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.334 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.334 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.334 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.334 16:00:13 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:12.334 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.334 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.334 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.334 16:00:13 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.334 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.334 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.334 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.334 16:00:13 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.334 16:00:13 -- rpc/rpc.sh@26 -- # jq length 00:04:12.658 16:00:13 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.658 00:04:12.658 real 0m0.226s 00:04:12.658 user 0m0.147s 00:04:12.658 sys 0m0.022s 00:04:12.658 16:00:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.658 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.658 ************************************ 00:04:12.658 END TEST rpc_integrity 00:04:12.658 ************************************ 00:04:12.658 16:00:13 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:12.658 16:00:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.658 16:00:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.658 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.658 ************************************ 00:04:12.658 START TEST rpc_plugins 00:04:12.658 ************************************ 00:04:12.658 16:00:13 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:12.658 16:00:13 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:12.658 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.658 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.658 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.658 16:00:13 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:12.658 16:00:13 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:12.658 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.658 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.658 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.658 16:00:13 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:12.658 { 00:04:12.658 "name": "Malloc1", 00:04:12.658 "aliases": [ 00:04:12.658 "0199ce24-6941-4408-acfc-b612347fad3e" 00:04:12.658 ], 00:04:12.658 "product_name": "Malloc disk", 00:04:12.658 "block_size": 4096, 00:04:12.658 "num_blocks": 256, 00:04:12.658 "uuid": "0199ce24-6941-4408-acfc-b612347fad3e", 00:04:12.658 "assigned_rate_limits": { 00:04:12.658 "rw_ios_per_sec": 0, 00:04:12.658 "rw_mbytes_per_sec": 0, 00:04:12.658 "r_mbytes_per_sec": 0, 00:04:12.658 "w_mbytes_per_sec": 0 00:04:12.658 }, 00:04:12.658 "claimed": false, 00:04:12.658 "zoned": false, 00:04:12.658 "supported_io_types": { 00:04:12.658 "read": true, 00:04:12.658 "write": true, 00:04:12.658 "unmap": true, 00:04:12.658 "write_zeroes": true, 00:04:12.658 "flush": true, 00:04:12.658 "reset": true, 00:04:12.658 "compare": false, 00:04:12.658 "compare_and_write": false, 00:04:12.658 "abort": true, 00:04:12.658 "nvme_admin": false, 00:04:12.658 "nvme_io": false 00:04:12.658 }, 00:04:12.658 "memory_domains": [ 00:04:12.658 { 00:04:12.658 "dma_device_id": "system", 00:04:12.658 "dma_device_type": 1 00:04:12.658 }, 00:04:12.658 { 00:04:12.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.658 "dma_device_type": 2 00:04:12.658 } 00:04:12.658 ], 00:04:12.658 "driver_specific": {} 00:04:12.658 } 00:04:12.658 ]' 00:04:12.658 16:00:13 -- rpc/rpc.sh@32 -- # jq length 00:04:12.658 16:00:13 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:12.658 16:00:13 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:12.658 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.658 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.658 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.658 16:00:13 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:12.658 16:00:13 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:12.658 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.658 16:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.658 16:00:13 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:12.658 16:00:13 -- rpc/rpc.sh@36 -- # jq length 00:04:12.658 16:00:13 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:12.658 00:04:12.658 real 0m0.116s 00:04:12.658 user 0m0.071s 00:04:12.658 sys 0m0.013s 00:04:12.658 16:00:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.658 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.658 ************************************ 00:04:12.658 END TEST rpc_plugins 00:04:12.658 ************************************ 00:04:12.658 16:00:13 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:12.658 16:00:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.658 16:00:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.658 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.957 ************************************ 00:04:12.957 START TEST rpc_trace_cmd_test 00:04:12.957 ************************************ 00:04:12.957 16:00:13 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:12.957 16:00:13 -- rpc/rpc.sh@40 -- # local info 00:04:12.957 16:00:13 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:12.957 16:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.957 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.957 16:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.957 16:00:14 -- rpc/rpc.sh@42 -- # info='{ 00:04:12.957 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3281238", 00:04:12.957 "tpoint_group_mask": "0x8", 00:04:12.957 "iscsi_conn": { 00:04:12.957 "mask": "0x2", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "scsi": { 00:04:12.957 "mask": "0x4", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "bdev": { 00:04:12.957 "mask": "0x8", 00:04:12.957 "tpoint_mask": "0xffffffffffffffff" 00:04:12.957 }, 00:04:12.957 "nvmf_rdma": { 00:04:12.957 "mask": "0x10", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "nvmf_tcp": { 00:04:12.957 "mask": "0x20", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "ftl": { 00:04:12.957 "mask": "0x40", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "blobfs": { 00:04:12.957 "mask": "0x80", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "dsa": { 00:04:12.957 "mask": "0x200", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "thread": { 00:04:12.957 "mask": "0x400", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "nvme_pcie": { 00:04:12.957 "mask": "0x800", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "iaa": { 00:04:12.957 "mask": "0x1000", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "nvme_tcp": { 00:04:12.957 "mask": "0x2000", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "bdev_nvme": { 00:04:12.957 "mask": "0x4000", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 }, 00:04:12.957 "sock": { 00:04:12.957 "mask": "0x8000", 00:04:12.957 "tpoint_mask": "0x0" 00:04:12.957 } 00:04:12.957 }' 00:04:12.957 16:00:14 -- rpc/rpc.sh@43 -- # jq length 00:04:12.957 16:00:14 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:12.957 16:00:14 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:12.957 16:00:14 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:12.957 16:00:14 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
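The rpc_plugins test that just finished loads a client-side plugin module into rpc.py; the module only has to be importable. A sketch of the same flow by hand, with the PYTHONPATH entry taken from the export earlier in this log:

    export PYTHONPATH=$PYTHONPATH:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins
    ./scripts/rpc.py --plugin rpc_plugin create_malloc       # -> "Malloc1"
    ./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1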
00:04:12.957 16:00:14 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:12.957 16:00:14 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:12.957 16:00:14 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:12.957 16:00:14 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:12.957 16:00:14 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:12.957 00:04:12.957 real 0m0.194s 00:04:12.957 user 0m0.173s 00:04:12.957 sys 0m0.016s 00:04:12.957 16:00:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.957 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:12.957 ************************************ 00:04:12.957 END TEST rpc_trace_cmd_test 00:04:12.957 ************************************ 00:04:12.957 16:00:14 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:12.957 16:00:14 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:12.957 16:00:14 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:12.957 16:00:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.957 16:00:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.957 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.213 ************************************ 00:04:13.213 START TEST rpc_daemon_integrity 00:04:13.213 ************************************ 00:04:13.213 16:00:14 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:13.213 16:00:14 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.213 16:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.213 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.213 16:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.213 16:00:14 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.214 16:00:14 -- rpc/rpc.sh@13 -- # jq length 00:04:13.214 16:00:14 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.214 16:00:14 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.214 16:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.214 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.214 16:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.214 16:00:14 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:13.214 16:00:14 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.214 16:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.214 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.214 16:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.214 16:00:14 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.214 { 00:04:13.214 "name": "Malloc2", 00:04:13.214 "aliases": [ 00:04:13.214 "56a3fce5-8fef-4e58-a68b-8ab0635f3a62" 00:04:13.214 ], 00:04:13.214 "product_name": "Malloc disk", 00:04:13.214 "block_size": 512, 00:04:13.214 "num_blocks": 16384, 00:04:13.214 "uuid": "56a3fce5-8fef-4e58-a68b-8ab0635f3a62", 00:04:13.214 "assigned_rate_limits": { 00:04:13.214 "rw_ios_per_sec": 0, 00:04:13.214 "rw_mbytes_per_sec": 0, 00:04:13.214 "r_mbytes_per_sec": 0, 00:04:13.214 "w_mbytes_per_sec": 0 00:04:13.214 }, 00:04:13.214 "claimed": false, 00:04:13.214 "zoned": false, 00:04:13.214 "supported_io_types": { 00:04:13.214 "read": true, 00:04:13.214 "write": true, 00:04:13.214 "unmap": true, 00:04:13.214 "write_zeroes": true, 00:04:13.214 "flush": true, 00:04:13.214 "reset": true, 00:04:13.214 "compare": false, 00:04:13.214 "compare_and_write": false, 00:04:13.214 "abort": true, 00:04:13.214 "nvme_admin": false, 00:04:13.214 "nvme_io": false 00:04:13.214 }, 00:04:13.214 "memory_domains": [ 00:04:13.214 { 00:04:13.214 "dma_device_id": "system", 00:04:13.214 
"dma_device_type": 1 00:04:13.214 }, 00:04:13.214 { 00:04:13.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.214 "dma_device_type": 2 00:04:13.214 } 00:04:13.214 ], 00:04:13.214 "driver_specific": {} 00:04:13.214 } 00:04:13.214 ]' 00:04:13.214 16:00:14 -- rpc/rpc.sh@17 -- # jq length 00:04:13.214 16:00:14 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.214 16:00:14 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:13.214 16:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.214 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.214 [2024-04-24 16:00:14.410216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:13.214 [2024-04-24 16:00:14.410259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.214 [2024-04-24 16:00:14.410289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14a03b0 00:04:13.214 [2024-04-24 16:00:14.410306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.214 [2024-04-24 16:00:14.411642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.214 [2024-04-24 16:00:14.411674] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.214 Passthru0 00:04:13.214 16:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.214 16:00:14 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.214 16:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.214 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.214 16:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.214 16:00:14 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.214 { 00:04:13.214 "name": "Malloc2", 00:04:13.214 "aliases": [ 00:04:13.214 "56a3fce5-8fef-4e58-a68b-8ab0635f3a62" 00:04:13.214 ], 00:04:13.214 "product_name": "Malloc disk", 00:04:13.214 "block_size": 512, 00:04:13.214 "num_blocks": 16384, 00:04:13.214 "uuid": "56a3fce5-8fef-4e58-a68b-8ab0635f3a62", 00:04:13.214 "assigned_rate_limits": { 00:04:13.214 "rw_ios_per_sec": 0, 00:04:13.214 "rw_mbytes_per_sec": 0, 00:04:13.214 "r_mbytes_per_sec": 0, 00:04:13.214 "w_mbytes_per_sec": 0 00:04:13.214 }, 00:04:13.214 "claimed": true, 00:04:13.214 "claim_type": "exclusive_write", 00:04:13.214 "zoned": false, 00:04:13.214 "supported_io_types": { 00:04:13.214 "read": true, 00:04:13.214 "write": true, 00:04:13.214 "unmap": true, 00:04:13.214 "write_zeroes": true, 00:04:13.214 "flush": true, 00:04:13.214 "reset": true, 00:04:13.214 "compare": false, 00:04:13.214 "compare_and_write": false, 00:04:13.214 "abort": true, 00:04:13.214 "nvme_admin": false, 00:04:13.214 "nvme_io": false 00:04:13.214 }, 00:04:13.214 "memory_domains": [ 00:04:13.214 { 00:04:13.214 "dma_device_id": "system", 00:04:13.214 "dma_device_type": 1 00:04:13.214 }, 00:04:13.214 { 00:04:13.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.214 "dma_device_type": 2 00:04:13.214 } 00:04:13.214 ], 00:04:13.214 "driver_specific": {} 00:04:13.214 }, 00:04:13.214 { 00:04:13.214 "name": "Passthru0", 00:04:13.214 "aliases": [ 00:04:13.214 "8767e059-f34c-5e1b-8343-3d2e26411c5d" 00:04:13.214 ], 00:04:13.214 "product_name": "passthru", 00:04:13.214 "block_size": 512, 00:04:13.214 "num_blocks": 16384, 00:04:13.214 "uuid": "8767e059-f34c-5e1b-8343-3d2e26411c5d", 00:04:13.214 "assigned_rate_limits": { 00:04:13.214 "rw_ios_per_sec": 0, 00:04:13.214 "rw_mbytes_per_sec": 0, 00:04:13.214 "r_mbytes_per_sec": 0, 00:04:13.214 
"w_mbytes_per_sec": 0 00:04:13.214 }, 00:04:13.214 "claimed": false, 00:04:13.214 "zoned": false, 00:04:13.214 "supported_io_types": { 00:04:13.214 "read": true, 00:04:13.214 "write": true, 00:04:13.214 "unmap": true, 00:04:13.214 "write_zeroes": true, 00:04:13.214 "flush": true, 00:04:13.214 "reset": true, 00:04:13.214 "compare": false, 00:04:13.214 "compare_and_write": false, 00:04:13.214 "abort": true, 00:04:13.214 "nvme_admin": false, 00:04:13.214 "nvme_io": false 00:04:13.214 }, 00:04:13.214 "memory_domains": [ 00:04:13.214 { 00:04:13.214 "dma_device_id": "system", 00:04:13.214 "dma_device_type": 1 00:04:13.214 }, 00:04:13.214 { 00:04:13.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.214 "dma_device_type": 2 00:04:13.214 } 00:04:13.214 ], 00:04:13.214 "driver_specific": { 00:04:13.214 "passthru": { 00:04:13.214 "name": "Passthru0", 00:04:13.214 "base_bdev_name": "Malloc2" 00:04:13.214 } 00:04:13.214 } 00:04:13.214 } 00:04:13.214 ]' 00:04:13.214 16:00:14 -- rpc/rpc.sh@21 -- # jq length 00:04:13.214 16:00:14 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.214 16:00:14 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.214 16:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.214 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.214 16:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.214 16:00:14 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:13.214 16:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.214 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.214 16:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.214 16:00:14 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.214 16:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.214 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.214 16:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.214 16:00:14 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.214 16:00:14 -- rpc/rpc.sh@26 -- # jq length 00:04:13.470 16:00:14 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.470 00:04:13.470 real 0m0.219s 00:04:13.470 user 0m0.142s 00:04:13.470 sys 0m0.025s 00:04:13.470 16:00:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:13.470 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.470 ************************************ 00:04:13.470 END TEST rpc_daemon_integrity 00:04:13.470 ************************************ 00:04:13.470 16:00:14 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:13.470 16:00:14 -- rpc/rpc.sh@84 -- # killprocess 3281238 00:04:13.470 16:00:14 -- common/autotest_common.sh@936 -- # '[' -z 3281238 ']' 00:04:13.470 16:00:14 -- common/autotest_common.sh@940 -- # kill -0 3281238 00:04:13.470 16:00:14 -- common/autotest_common.sh@941 -- # uname 00:04:13.470 16:00:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:13.470 16:00:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3281238 00:04:13.470 16:00:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:13.470 16:00:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:13.470 16:00:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3281238' 00:04:13.470 killing process with pid 3281238 00:04:13.470 16:00:14 -- common/autotest_common.sh@955 -- # kill 3281238 00:04:13.470 16:00:14 -- common/autotest_common.sh@960 -- # wait 3281238 00:04:14.034 00:04:14.034 real 0m2.737s 00:04:14.034 user 0m3.458s 
00:04:14.034 sys 0m0.786s 00:04:14.034 16:00:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:14.034 16:00:15 -- common/autotest_common.sh@10 -- # set +x 00:04:14.034 ************************************ 00:04:14.034 END TEST rpc 00:04:14.034 ************************************ 00:04:14.034 16:00:15 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.034 16:00:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.034 16:00:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.034 16:00:15 -- common/autotest_common.sh@10 -- # set +x 00:04:14.034 ************************************ 00:04:14.034 START TEST skip_rpc 00:04:14.034 ************************************ 00:04:14.035 16:00:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.035 * Looking for test storage... 00:04:14.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.035 16:00:15 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.035 16:00:15 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:14.035 16:00:15 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:14.035 16:00:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.035 16:00:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.035 16:00:15 -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 ************************************ 00:04:14.035 START TEST skip_rpc 00:04:14.035 ************************************ 00:04:14.035 16:00:15 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:14.035 16:00:15 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3281730 00:04:14.035 16:00:15 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:14.035 16:00:15 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.035 16:00:15 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:14.293 [2024-04-24 16:00:15.363485] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:04:14.293 [2024-04-24 16:00:15.363553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281730 ] 00:04:14.293 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.293 [2024-04-24 16:00:15.422865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.293 [2024-04-24 16:00:15.542772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.568 16:00:20 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:19.568 16:00:20 -- common/autotest_common.sh@638 -- # local es=0 00:04:19.568 16:00:20 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:19.568 16:00:20 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:19.568 16:00:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:19.568 16:00:20 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:19.568 16:00:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:19.568 16:00:20 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:19.568 16:00:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.568 16:00:20 -- common/autotest_common.sh@10 -- # set +x 00:04:19.568 16:00:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:19.568 16:00:20 -- common/autotest_common.sh@641 -- # es=1 00:04:19.568 16:00:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:19.568 16:00:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:19.568 16:00:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:19.568 16:00:20 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:19.568 16:00:20 -- rpc/skip_rpc.sh@23 -- # killprocess 3281730 00:04:19.568 16:00:20 -- common/autotest_common.sh@936 -- # '[' -z 3281730 ']' 00:04:19.568 16:00:20 -- common/autotest_common.sh@940 -- # kill -0 3281730 00:04:19.568 16:00:20 -- common/autotest_common.sh@941 -- # uname 00:04:19.568 16:00:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:19.568 16:00:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3281730 00:04:19.568 16:00:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:19.568 16:00:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:19.568 16:00:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3281730' 00:04:19.568 killing process with pid 3281730 00:04:19.568 16:00:20 -- common/autotest_common.sh@955 -- # kill 3281730 00:04:19.568 16:00:20 -- common/autotest_common.sh@960 -- # wait 3281730 00:04:19.568 00:04:19.568 real 0m5.502s 00:04:19.568 user 0m5.184s 00:04:19.568 sys 0m0.314s 00:04:19.568 16:00:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:19.568 16:00:20 -- common/autotest_common.sh@10 -- # set +x 00:04:19.568 ************************************ 00:04:19.568 END TEST skip_rpc 00:04:19.568 ************************************ 00:04:19.568 16:00:20 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:19.568 16:00:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.568 16:00:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.568 16:00:20 -- common/autotest_common.sh@10 -- # set +x 00:04:19.825 ************************************ 00:04:19.825 START TEST skip_rpc_with_json 00:04:19.825 ************************************ 
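skip_rpc, which ends above, asserts the negative path: with --no-rpc-server nothing is listening, so rpc_cmd spdk_get_version must fail and es=1 is the expected outcome. The same check in isolation, as a sketch:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                                  # matches the test's fixed wait
    ./scripts/rpc.py spdk_get_version \
        && echo "unexpected: RPC answered" \
        || echo "RPC refused, as the test expects"
    kill $tgt_pid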
00:04:19.825 16:00:20 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:19.825 16:00:20 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:19.825 16:00:20 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3282423 00:04:19.825 16:00:20 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:19.825 16:00:20 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.825 16:00:20 -- rpc/skip_rpc.sh@31 -- # waitforlisten 3282423 00:04:19.825 16:00:20 -- common/autotest_common.sh@817 -- # '[' -z 3282423 ']' 00:04:19.825 16:00:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.825 16:00:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:19.825 16:00:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.825 16:00:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:19.825 16:00:20 -- common/autotest_common.sh@10 -- # set +x 00:04:19.825 [2024-04-24 16:00:20.991232] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:04:19.825 [2024-04-24 16:00:20.991329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282423 ] 00:04:19.825 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.825 [2024-04-24 16:00:21.053452] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.083 [2024-04-24 16:00:21.168653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.342 16:00:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:20.342 16:00:21 -- common/autotest_common.sh@850 -- # return 0 00:04:20.342 16:00:21 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:20.342 16:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.342 16:00:21 -- common/autotest_common.sh@10 -- # set +x 00:04:20.342 [2024-04-24 16:00:21.441129] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:20.342 request: 00:04:20.342 { 00:04:20.342 "trtype": "tcp", 00:04:20.342 "method": "nvmf_get_transports", 00:04:20.342 "req_id": 1 00:04:20.342 } 00:04:20.342 Got JSON-RPC error response 00:04:20.342 response: 00:04:20.342 { 00:04:20.342 "code": -19, 00:04:20.342 "message": "No such device" 00:04:20.342 } 00:04:20.342 16:00:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:20.342 16:00:21 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:20.342 16:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.342 16:00:21 -- common/autotest_common.sh@10 -- # set +x 00:04:20.342 [2024-04-24 16:00:21.449243] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.342 16:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.342 16:00:21 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:20.342 16:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.342 16:00:21 -- common/autotest_common.sh@10 -- # set +x 00:04:20.342 16:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.342 16:00:21 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.342 { 
00:04:20.342 "subsystems": [ 00:04:20.342 { 00:04:20.342 "subsystem": "vfio_user_target", 00:04:20.342 "config": null 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "subsystem": "keyring", 00:04:20.342 "config": [] 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "subsystem": "iobuf", 00:04:20.342 "config": [ 00:04:20.342 { 00:04:20.342 "method": "iobuf_set_options", 00:04:20.342 "params": { 00:04:20.342 "small_pool_count": 8192, 00:04:20.342 "large_pool_count": 1024, 00:04:20.342 "small_bufsize": 8192, 00:04:20.342 "large_bufsize": 135168 00:04:20.342 } 00:04:20.342 } 00:04:20.342 ] 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "subsystem": "sock", 00:04:20.342 "config": [ 00:04:20.342 { 00:04:20.342 "method": "sock_impl_set_options", 00:04:20.342 "params": { 00:04:20.342 "impl_name": "posix", 00:04:20.342 "recv_buf_size": 2097152, 00:04:20.342 "send_buf_size": 2097152, 00:04:20.342 "enable_recv_pipe": true, 00:04:20.342 "enable_quickack": false, 00:04:20.342 "enable_placement_id": 0, 00:04:20.342 "enable_zerocopy_send_server": true, 00:04:20.342 "enable_zerocopy_send_client": false, 00:04:20.342 "zerocopy_threshold": 0, 00:04:20.342 "tls_version": 0, 00:04:20.342 "enable_ktls": false 00:04:20.342 } 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "method": "sock_impl_set_options", 00:04:20.342 "params": { 00:04:20.342 "impl_name": "ssl", 00:04:20.342 "recv_buf_size": 4096, 00:04:20.342 "send_buf_size": 4096, 00:04:20.342 "enable_recv_pipe": true, 00:04:20.342 "enable_quickack": false, 00:04:20.342 "enable_placement_id": 0, 00:04:20.342 "enable_zerocopy_send_server": true, 00:04:20.342 "enable_zerocopy_send_client": false, 00:04:20.342 "zerocopy_threshold": 0, 00:04:20.342 "tls_version": 0, 00:04:20.342 "enable_ktls": false 00:04:20.342 } 00:04:20.342 } 00:04:20.342 ] 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "subsystem": "vmd", 00:04:20.342 "config": [] 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "subsystem": "accel", 00:04:20.342 "config": [ 00:04:20.342 { 00:04:20.342 "method": "accel_set_options", 00:04:20.342 "params": { 00:04:20.342 "small_cache_size": 128, 00:04:20.342 "large_cache_size": 16, 00:04:20.342 "task_count": 2048, 00:04:20.342 "sequence_count": 2048, 00:04:20.342 "buf_count": 2048 00:04:20.342 } 00:04:20.342 } 00:04:20.342 ] 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "subsystem": "bdev", 00:04:20.342 "config": [ 00:04:20.342 { 00:04:20.342 "method": "bdev_set_options", 00:04:20.342 "params": { 00:04:20.342 "bdev_io_pool_size": 65535, 00:04:20.342 "bdev_io_cache_size": 256, 00:04:20.342 "bdev_auto_examine": true, 00:04:20.342 "iobuf_small_cache_size": 128, 00:04:20.342 "iobuf_large_cache_size": 16 00:04:20.342 } 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "method": "bdev_raid_set_options", 00:04:20.342 "params": { 00:04:20.342 "process_window_size_kb": 1024 00:04:20.342 } 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "method": "bdev_iscsi_set_options", 00:04:20.342 "params": { 00:04:20.342 "timeout_sec": 30 00:04:20.342 } 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "method": "bdev_nvme_set_options", 00:04:20.342 "params": { 00:04:20.342 "action_on_timeout": "none", 00:04:20.342 "timeout_us": 0, 00:04:20.342 "timeout_admin_us": 0, 00:04:20.342 "keep_alive_timeout_ms": 10000, 00:04:20.342 "arbitration_burst": 0, 00:04:20.342 "low_priority_weight": 0, 00:04:20.342 "medium_priority_weight": 0, 00:04:20.342 "high_priority_weight": 0, 00:04:20.342 "nvme_adminq_poll_period_us": 10000, 00:04:20.342 "nvme_ioq_poll_period_us": 0, 00:04:20.342 "io_queue_requests": 0, 00:04:20.342 
"delay_cmd_submit": true, 00:04:20.342 "transport_retry_count": 4, 00:04:20.342 "bdev_retry_count": 3, 00:04:20.342 "transport_ack_timeout": 0, 00:04:20.342 "ctrlr_loss_timeout_sec": 0, 00:04:20.342 "reconnect_delay_sec": 0, 00:04:20.342 "fast_io_fail_timeout_sec": 0, 00:04:20.342 "disable_auto_failback": false, 00:04:20.342 "generate_uuids": false, 00:04:20.342 "transport_tos": 0, 00:04:20.342 "nvme_error_stat": false, 00:04:20.342 "rdma_srq_size": 0, 00:04:20.342 "io_path_stat": false, 00:04:20.342 "allow_accel_sequence": false, 00:04:20.342 "rdma_max_cq_size": 0, 00:04:20.342 "rdma_cm_event_timeout_ms": 0, 00:04:20.342 "dhchap_digests": [ 00:04:20.342 "sha256", 00:04:20.342 "sha384", 00:04:20.342 "sha512" 00:04:20.342 ], 00:04:20.342 "dhchap_dhgroups": [ 00:04:20.342 "null", 00:04:20.342 "ffdhe2048", 00:04:20.342 "ffdhe3072", 00:04:20.342 "ffdhe4096", 00:04:20.342 "ffdhe6144", 00:04:20.342 "ffdhe8192" 00:04:20.342 ] 00:04:20.342 } 00:04:20.342 }, 00:04:20.342 { 00:04:20.342 "method": "bdev_nvme_set_hotplug", 00:04:20.342 "params": { 00:04:20.342 "period_us": 100000, 00:04:20.342 "enable": false 00:04:20.343 } 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "method": "bdev_wait_for_examine" 00:04:20.343 } 00:04:20.343 ] 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "subsystem": "scsi", 00:04:20.343 "config": null 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "subsystem": "scheduler", 00:04:20.343 "config": [ 00:04:20.343 { 00:04:20.343 "method": "framework_set_scheduler", 00:04:20.343 "params": { 00:04:20.343 "name": "static" 00:04:20.343 } 00:04:20.343 } 00:04:20.343 ] 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "subsystem": "vhost_scsi", 00:04:20.343 "config": [] 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "subsystem": "vhost_blk", 00:04:20.343 "config": [] 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "subsystem": "ublk", 00:04:20.343 "config": [] 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "subsystem": "nbd", 00:04:20.343 "config": [] 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "subsystem": "nvmf", 00:04:20.343 "config": [ 00:04:20.343 { 00:04:20.343 "method": "nvmf_set_config", 00:04:20.343 "params": { 00:04:20.343 "discovery_filter": "match_any", 00:04:20.343 "admin_cmd_passthru": { 00:04:20.343 "identify_ctrlr": false 00:04:20.343 } 00:04:20.343 } 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "method": "nvmf_set_max_subsystems", 00:04:20.343 "params": { 00:04:20.343 "max_subsystems": 1024 00:04:20.343 } 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "method": "nvmf_set_crdt", 00:04:20.343 "params": { 00:04:20.343 "crdt1": 0, 00:04:20.343 "crdt2": 0, 00:04:20.343 "crdt3": 0 00:04:20.343 } 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "method": "nvmf_create_transport", 00:04:20.343 "params": { 00:04:20.343 "trtype": "TCP", 00:04:20.343 "max_queue_depth": 128, 00:04:20.343 "max_io_qpairs_per_ctrlr": 127, 00:04:20.343 "in_capsule_data_size": 4096, 00:04:20.343 "max_io_size": 131072, 00:04:20.343 "io_unit_size": 131072, 00:04:20.343 "max_aq_depth": 128, 00:04:20.343 "num_shared_buffers": 511, 00:04:20.343 "buf_cache_size": 4294967295, 00:04:20.343 "dif_insert_or_strip": false, 00:04:20.343 "zcopy": false, 00:04:20.343 "c2h_success": true, 00:04:20.343 "sock_priority": 0, 00:04:20.343 "abort_timeout_sec": 1, 00:04:20.343 "ack_timeout": 0, 00:04:20.343 "data_wr_pool_size": 0 00:04:20.343 } 00:04:20.343 } 00:04:20.343 ] 00:04:20.343 }, 00:04:20.343 { 00:04:20.343 "subsystem": "iscsi", 00:04:20.343 "config": [ 00:04:20.343 { 00:04:20.343 "method": "iscsi_set_options", 00:04:20.343 "params": { 00:04:20.343 
"node_base": "iqn.2016-06.io.spdk", 00:04:20.343 "max_sessions": 128, 00:04:20.343 "max_connections_per_session": 2, 00:04:20.343 "max_queue_depth": 64, 00:04:20.343 "default_time2wait": 2, 00:04:20.343 "default_time2retain": 20, 00:04:20.343 "first_burst_length": 8192, 00:04:20.343 "immediate_data": true, 00:04:20.343 "allow_duplicated_isid": false, 00:04:20.343 "error_recovery_level": 0, 00:04:20.343 "nop_timeout": 60, 00:04:20.343 "nop_in_interval": 30, 00:04:20.343 "disable_chap": false, 00:04:20.343 "require_chap": false, 00:04:20.343 "mutual_chap": false, 00:04:20.343 "chap_group": 0, 00:04:20.343 "max_large_datain_per_connection": 64, 00:04:20.343 "max_r2t_per_connection": 4, 00:04:20.343 "pdu_pool_size": 36864, 00:04:20.343 "immediate_data_pool_size": 16384, 00:04:20.343 "data_out_pool_size": 2048 00:04:20.343 } 00:04:20.343 } 00:04:20.343 ] 00:04:20.343 } 00:04:20.343 ] 00:04:20.343 } 00:04:20.343 16:00:21 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:20.343 16:00:21 -- rpc/skip_rpc.sh@40 -- # killprocess 3282423 00:04:20.343 16:00:21 -- common/autotest_common.sh@936 -- # '[' -z 3282423 ']' 00:04:20.343 16:00:21 -- common/autotest_common.sh@940 -- # kill -0 3282423 00:04:20.343 16:00:21 -- common/autotest_common.sh@941 -- # uname 00:04:20.343 16:00:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:20.343 16:00:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3282423 00:04:20.601 16:00:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:20.601 16:00:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:20.601 16:00:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3282423' 00:04:20.601 killing process with pid 3282423 00:04:20.601 16:00:21 -- common/autotest_common.sh@955 -- # kill 3282423 00:04:20.601 16:00:21 -- common/autotest_common.sh@960 -- # wait 3282423 00:04:20.859 16:00:22 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3282563 00:04:20.859 16:00:22 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.859 16:00:22 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:26.129 16:00:27 -- rpc/skip_rpc.sh@50 -- # killprocess 3282563 00:04:26.129 16:00:27 -- common/autotest_common.sh@936 -- # '[' -z 3282563 ']' 00:04:26.129 16:00:27 -- common/autotest_common.sh@940 -- # kill -0 3282563 00:04:26.129 16:00:27 -- common/autotest_common.sh@941 -- # uname 00:04:26.129 16:00:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:26.129 16:00:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3282563 00:04:26.129 16:00:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:26.129 16:00:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:26.129 16:00:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3282563' 00:04:26.129 killing process with pid 3282563 00:04:26.129 16:00:27 -- common/autotest_common.sh@955 -- # kill 3282563 00:04:26.129 16:00:27 -- common/autotest_common.sh@960 -- # wait 3282563 00:04:26.387 16:00:27 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.387 16:00:27 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.387 00:04:26.387 real 0m6.648s 00:04:26.387 user 0m6.255s 00:04:26.387 sys 0m0.676s 00:04:26.387 
16:00:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.387 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.387 ************************************ 00:04:26.387 END TEST skip_rpc_with_json 00:04:26.387 ************************************ 00:04:26.387 16:00:27 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.387 16:00:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.387 16:00:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.387 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.645 ************************************ 00:04:26.645 START TEST skip_rpc_with_delay 00:04:26.645 ************************************ 00:04:26.645 16:00:27 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:26.645 16:00:27 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.645 16:00:27 -- common/autotest_common.sh@638 -- # local es=0 00:04:26.645 16:00:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.645 16:00:27 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.645 16:00:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:26.645 16:00:27 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.645 16:00:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:26.645 16:00:27 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.646 16:00:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:26.646 16:00:27 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.646 16:00:27 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:26.646 16:00:27 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.646 [2024-04-24 16:00:27.761264] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
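The app.c error above is the whole point of skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until an RPC arrives, so pairing it with --no-rpc-server can never complete and is rejected at startup. The valid pairing, as a sketch (framework_start_init is the RPC that releases the deferred init; it does not appear in this log):

    ./build/bin/spdk_tgt --wait-for-rpc &
    sleep 2                                                  # or poll the socket as sketched earlier
    ./scripts/rpc.py framework_start_init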
00:04:26.646 [2024-04-24 16:00:27.761385] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:26.646 16:00:27 -- common/autotest_common.sh@641 -- # es=1 00:04:26.646 16:00:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:26.646 16:00:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:26.646 16:00:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:26.646 00:04:26.646 real 0m0.062s 00:04:26.646 user 0m0.037s 00:04:26.646 sys 0m0.024s 00:04:26.646 16:00:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.646 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.646 ************************************ 00:04:26.646 END TEST skip_rpc_with_delay 00:04:26.646 ************************************ 00:04:26.646 16:00:27 -- rpc/skip_rpc.sh@77 -- # uname 00:04:26.646 16:00:27 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:26.646 16:00:27 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:26.646 16:00:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.646 16:00:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.646 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.646 ************************************ 00:04:26.646 START TEST exit_on_failed_rpc_init 00:04:26.646 ************************************ 00:04:26.646 16:00:27 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:26.646 16:00:27 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3283416 00:04:26.646 16:00:27 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.646 16:00:27 -- rpc/skip_rpc.sh@63 -- # waitforlisten 3283416 00:04:26.646 16:00:27 -- common/autotest_common.sh@817 -- # '[' -z 3283416 ']' 00:04:26.646 16:00:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.646 16:00:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:26.646 16:00:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.646 16:00:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:26.646 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.904 [2024-04-24 16:00:27.941174] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:04:26.904 [2024-04-24 16:00:27.941255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283416 ] 00:04:26.904 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.904 [2024-04-24 16:00:27.998026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.904 [2024-04-24 16:00:28.101674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.162 16:00:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:27.162 16:00:28 -- common/autotest_common.sh@850 -- # return 0 00:04:27.162 16:00:28 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.162 16:00:28 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.162 16:00:28 -- common/autotest_common.sh@638 -- # local es=0 00:04:27.162 16:00:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.162 16:00:28 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.162 16:00:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:27.162 16:00:28 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.162 16:00:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:27.162 16:00:28 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.162 16:00:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:27.162 16:00:28 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.162 16:00:28 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.162 16:00:28 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.162 [2024-04-24 16:00:28.415034] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:04:27.162 [2024-04-24 16:00:28.415126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283432 ] 00:04:27.162 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.420 [2024-04-24 16:00:28.475855] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.420 [2024-04-24 16:00:28.589753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.420 [2024-04-24 16:00:28.589884] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
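The 'socket in use' error just logged is the point of exit_on_failed_rpc_init: both instances default to the RPC socket /var/tmp/spdk.sock, so the second one must fail its RPC init and exit non-zero. A sketch of the collision, reusing the NOT helper sketched earlier (the sleep is a crude stand-in for the waitforlisten polling in the trace):

    app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$app" -m 0x1 &           # first instance claims /var/tmp/spdk.sock
    sleep 2                   # stand-in for waitforlisten
    NOT "$app" -m 0x2         # second instance must fail: socket in use
    # A second instance can only coexist on its own socket, e.g.:
    #   "$app" -m 0x2 -r /var/tmp/spdk2.sock &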
00:04:27.420 [2024-04-24 16:00:28.589903] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:27.420 [2024-04-24 16:00:28.589915] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:27.678 16:00:28 -- common/autotest_common.sh@641 -- # es=234 00:04:27.678 16:00:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:27.678 16:00:28 -- common/autotest_common.sh@650 -- # es=106 00:04:27.678 16:00:28 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:27.678 16:00:28 -- common/autotest_common.sh@658 -- # es=1 00:04:27.678 16:00:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:27.678 16:00:28 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:27.678 16:00:28 -- rpc/skip_rpc.sh@70 -- # killprocess 3283416 00:04:27.678 16:00:28 -- common/autotest_common.sh@936 -- # '[' -z 3283416 ']' 00:04:27.678 16:00:28 -- common/autotest_common.sh@940 -- # kill -0 3283416 00:04:27.678 16:00:28 -- common/autotest_common.sh@941 -- # uname 00:04:27.678 16:00:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:27.678 16:00:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3283416 00:04:27.678 16:00:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:27.678 16:00:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:27.678 16:00:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3283416' 00:04:27.678 killing process with pid 3283416 00:04:27.678 16:00:28 -- common/autotest_common.sh@955 -- # kill 3283416 00:04:27.678 16:00:28 -- common/autotest_common.sh@960 -- # wait 3283416 00:04:27.935 00:04:27.935 real 0m1.295s 00:04:27.935 user 0m1.447s 00:04:27.935 sys 0m0.447s 00:04:27.935 16:00:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.935 16:00:29 -- common/autotest_common.sh@10 -- # set +x 00:04:27.935 ************************************ 00:04:27.935 END TEST exit_on_failed_rpc_init 00:04:27.935 ************************************ 00:04:27.935 16:00:29 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:27.935 00:04:27.935 real 0m14.043s 00:04:27.935 user 0m13.134s 00:04:27.935 sys 0m1.756s 00:04:27.935 16:00:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.935 16:00:29 -- common/autotest_common.sh@10 -- # set +x 00:04:27.935 ************************************ 00:04:27.935 END TEST skip_rpc 00:04:27.935 ************************************ 00:04:28.193 16:00:29 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.193 16:00:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.193 16:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.193 16:00:29 -- common/autotest_common.sh@10 -- # set +x 00:04:28.193 ************************************ 00:04:28.193 START TEST rpc_client 00:04:28.193 ************************************ 00:04:28.193 16:00:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.193 * Looking for test storage... 
00:04:28.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:04:28.193 16:00:29 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:04:28.193 OK
00:04:28.193 16:00:29 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:28.193
00:04:28.193 real 0m0.065s
00:04:28.193 user 0m0.025s
00:04:28.193 sys 0m0.045s
00:04:28.193 16:00:29 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:28.193 16:00:29 -- common/autotest_common.sh@10 -- # set +x
00:04:28.193 ************************************
00:04:28.193 END TEST rpc_client
00:04:28.193 ************************************
00:04:28.193 16:00:29 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:28.193 16:00:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:28.193 16:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:28.193 16:00:29 -- common/autotest_common.sh@10 -- # set +x
00:04:28.451 ************************************
00:04:28.451 START TEST json_config
00:04:28.451 ************************************
00:04:28.451 16:00:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:28.451 16:00:29 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:28.451 16:00:29 -- nvmf/common.sh@7 -- # uname -s
00:04:28.451 16:00:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:28.451 16:00:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:28.451 16:00:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:28.451 16:00:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:28.451 16:00:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:28.451 16:00:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:28.451 16:00:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:28.451 16:00:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:28.451 16:00:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:28.452 16:00:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:28.452 16:00:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:04:28.452 16:00:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:04:28.452 16:00:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:28.452 16:00:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:28.452 16:00:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:28.452 16:00:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:28.452 16:00:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:28.452 16:00:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:28.452 16:00:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:28.452 16:00:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:28.452 16:00:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:28.452 16:00:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:28.452 16:00:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:28.452 16:00:29 -- paths/export.sh@5 -- # export PATH
00:04:28.452 16:00:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:28.452 16:00:29 -- nvmf/common.sh@47 -- # : 0
00:04:28.452 16:00:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:04:28.452 16:00:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:04:28.452 16:00:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:28.452 16:00:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:28.452 16:00:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:28.452 16:00:29 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:04:28.452 16:00:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:04:28.452 16:00:29 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:04:28.452 16:00:29 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:28.452 16:00:29 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:04:28.452 16:00:29 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:04:28.452 16:00:29 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:04:28.452 16:00:29 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:04:28.452 16:00:29 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:04:28.452 16:00:29 -- json_config/json_config.sh@31 -- # declare -A app_pid
00:04:28.452 16:00:29 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:04:28.452 16:00:29 -- json_config/json_config.sh@32 -- # declare -A app_socket
00:04:28.452 16:00:29 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:04:28.452 16:00:29 -- json_config/json_config.sh@33 -- # declare -A app_params
00:04:28.452 16:00:29 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:04:28.452 16:00:29 -- json_config/json_config.sh@34 -- # declare -A configs_path
00:04:28.452 16:00:29 -- json_config/json_config.sh@40 -- # last_event_id=0
00:04:28.452 16:00:29 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:28.452 16:00:29 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init'
00:04:28.452 INFO: JSON configuration test init
00:04:28.452 16:00:29 -- json_config/json_config.sh@357 -- # json_config_test_init
00:04:28.452 16:00:29 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init
00:04:28.452 16:00:29 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:28.452 16:00:29 -- common/autotest_common.sh@10 -- # set +x
00:04:28.452 16:00:29 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target
00:04:28.452 16:00:29 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:28.452 16:00:29 -- common/autotest_common.sh@10 -- # set +x
00:04:28.452 16:00:29 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc
00:04:28.452 16:00:29 -- json_config/common.sh@9 -- # local app=target
00:04:28.452 16:00:29 -- json_config/common.sh@10 -- # shift
00:04:28.452 16:00:29 -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:28.452 16:00:29 -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:28.452 16:00:29 -- json_config/common.sh@15 -- # local app_extra_params=
00:04:28.452 16:00:29 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:28.452 16:00:29 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:28.452 16:00:29 -- json_config/common.sh@22 -- # app_pid["$app"]=3283690
00:04:28.452 16:00:29 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:04:28.452 16:00:29 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:28.452 Waiting for target to run...
00:04:28.452 16:00:29 -- json_config/common.sh@25 -- # waitforlisten 3283690 /var/tmp/spdk_tgt.sock
00:04:28.452 16:00:29 -- common/autotest_common.sh@817 -- # '[' -z 3283690 ']'
00:04:28.452 16:00:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:28.452 16:00:29 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:28.452 16:00:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:28.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:28.452 16:00:29 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:28.452 16:00:29 -- common/autotest_common.sh@10 -- # set +x
00:04:28.452 [2024-04-24 16:00:29.631307] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:04:28.452 [2024-04-24 16:00:29.631403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283690 ]
00:04:28.452 EAL: No free 2048 kB hugepages reported on node 1
00:04:29.030 [2024-04-24 16:00:30.133372] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:29.030 [2024-04-24 16:00:30.239144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:29.603 16:00:30 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:29.603 16:00:30 -- common/autotest_common.sh@850 -- # return 0
00:04:29.603 16:00:30 -- json_config/common.sh@26 -- # echo ''
00:04:29.603
00:04:29.603 16:00:30 -- json_config/json_config.sh@269 -- # create_accel_config
00:04:29.603 16:00:30 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config
00:04:29.603 16:00:30 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:29.603 16:00:30 -- common/autotest_common.sh@10 -- # set +x
00:04:29.603 16:00:30 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]]
00:04:29.603 16:00:30 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config
00:04:29.603 16:00:30 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:29.603 16:00:30 -- common/autotest_common.sh@10 -- # set +x
00:04:29.603 16:00:30 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:04:29.603 16:00:30 -- json_config/json_config.sh@274 -- # tgt_rpc load_config
00:04:29.603 16:00:30 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:04:32.884 16:00:33 -- json_config/json_config.sh@276 -- # tgt_check_notification_types
00:04:32.884 16:00:33 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:04:32.884 16:00:33 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:32.884 16:00:33 -- common/autotest_common.sh@10 -- # set +x
00:04:32.884 16:00:33 -- json_config/json_config.sh@45 -- # local ret=0
00:04:32.884 16:00:33 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:04:32.884 16:00:33 -- json_config/json_config.sh@46 -- # local enabled_types
00:04:32.884 16:00:33 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:04:32.884 16:00:33 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:04:32.884 16:00:33 -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:04:32.884 16:00:34 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:04:32.884 16:00:34 -- json_config/json_config.sh@48 -- # local get_types
00:04:32.884 16:00:34 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:04:32.884 16:00:34 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types
00:04:32.884 16:00:34 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:32.884 16:00:34 -- common/autotest_common.sh@10 -- # set +x
00:04:32.884 16:00:34 -- json_config/json_config.sh@55 -- # return 0
00:04:32.884 16:00:34 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]]
00:04:32.884 16:00:34 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:04:32.884 16:00:34 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:04:32.884 16:00:34 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]]
00:04:32.884 16:00:34 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config
00:04:32.884 16:00:34 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config
00:04:32.884 16:00:34 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:32.884 16:00:34 -- common/autotest_common.sh@10 -- # set +x
00:04:32.884 16:00:34 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:04:32.884 16:00:34 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]]
00:04:32.884 16:00:34 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]]
00:04:32.884 16:00:34 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:32.884 16:00:34 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:33.142 MallocForNvmf0
00:04:33.142 16:00:34 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:33.142 16:00:34 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:33.399 MallocForNvmf1
00:04:33.399 16:00:34 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:04:33.400 16:00:34 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:04:33.657 [2024-04-24 16:00:34.782390] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:33.657 16:00:34 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:33.657 16:00:34 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:33.915 16:00:35 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:33.915 16:00:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:34.172 16:00:35 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:34.172 16:00:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:34.433 16:00:35 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:34.433 16:00:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:34.691 [2024-04-24 16:00:35.745579] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:34.691 16:00:35 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config
00:04:34.691 16:00:35 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:34.691 16:00:35 -- common/autotest_common.sh@10 -- # set +x
00:04:34.692 16:00:35 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target
00:04:34.692 16:00:35 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:34.692 16:00:35 -- common/autotest_common.sh@10 -- # set +x
00:04:34.692 16:00:35 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]]
00:04:34.692 16:00:35 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:34.692 16:00:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:34.949 MallocBdevForConfigChangeCheck
00:04:34.949 16:00:36 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init
00:04:34.949 16:00:36 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:34.949 16:00:36 -- common/autotest_common.sh@10 -- # set +x
00:04:34.949 16:00:36 -- json_config/json_config.sh@359 -- # tgt_rpc save_config
00:04:34.949 16:00:36 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:35.207 16:00:36 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...'
00:04:35.207 INFO: shutting down applications...
00:04:35.207 16:00:36 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]]
00:04:35.207 16:00:36 -- json_config/json_config.sh@368 -- # json_config_clear target
00:04:35.207 16:00:36 -- json_config/json_config.sh@332 -- # [[ -n 22 ]]
00:04:35.207 16:00:36 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:04:37.116 Calling clear_iscsi_subsystem
00:04:37.116 Calling clear_nvmf_subsystem
00:04:37.117 Calling clear_nbd_subsystem
00:04:37.117 Calling clear_ublk_subsystem
00:04:37.117 Calling clear_vhost_blk_subsystem
00:04:37.117 Calling clear_vhost_scsi_subsystem
00:04:37.117 Calling clear_bdev_subsystem
00:04:37.117 16:00:38 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:04:37.117 16:00:38 -- json_config/json_config.sh@343 -- # count=100
00:04:37.117 16:00:38 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']'
00:04:37.117 16:00:38 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:37.117 16:00:38 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:04:37.117 16:00:38 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:04:37.117 16:00:38 -- json_config/json_config.sh@345 -- # break
00:04:37.117 16:00:38 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']'
00:04:37.117 16:00:38 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target
00:04:37.117 16:00:38 -- json_config/common.sh@31 -- # local app=target
00:04:37.117 16:00:38 -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:37.117 16:00:38 -- json_config/common.sh@35 -- # [[ -n 3283690 ]]
00:04:37.117 16:00:38 -- json_config/common.sh@38 -- # kill -SIGINT 3283690
00:04:37.117 16:00:38 -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:37.117 16:00:38 -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:37.117 16:00:38 -- json_config/common.sh@41 -- # kill -0 3283690
00:04:37.117 16:00:38 -- json_config/common.sh@45 -- # sleep 0.5
00:04:37.685 16:00:38 -- json_config/common.sh@40 -- # (( i++ ))
00:04:37.685 16:00:38 -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:37.685 16:00:38 -- json_config/common.sh@41 -- # kill -0 3283690
00:04:37.685 16:00:38 -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:37.685 16:00:38 -- json_config/common.sh@43 -- # break
00:04:37.685 16:00:38 -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:37.685 16:00:38 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:37.685 SPDK target shutdown done
00:04:37.685 16:00:38 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...'
00:04:37.685 INFO: relaunching applications...
00:04:37.685 16:00:38 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:37.685 16:00:38 -- json_config/common.sh@9 -- # local app=target
00:04:37.685 16:00:38 -- json_config/common.sh@10 -- # shift
00:04:37.685 16:00:38 -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:37.685 16:00:38 -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:37.685 16:00:38 -- json_config/common.sh@15 -- # local app_extra_params=
00:04:37.685 16:00:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:37.685 16:00:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:37.685 16:00:38 -- json_config/common.sh@22 -- # app_pid["$app"]=3284890
00:04:37.685 16:00:38 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:37.685 Waiting for target to run...
00:04:37.685 16:00:38 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:37.685 16:00:38 -- json_config/common.sh@25 -- # waitforlisten 3284890 /var/tmp/spdk_tgt.sock
00:04:37.685 16:00:38 -- common/autotest_common.sh@817 -- # '[' -z 3284890 ']'
00:04:37.685 16:00:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:37.685 16:00:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:37.685 16:00:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:37.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:37.685 16:00:38 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:37.685 16:00:38 -- common/autotest_common.sh@10 -- # set +x
00:04:37.685 [2024-04-24 16:00:38.949101] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:04:37.685 [2024-04-24 16:00:38.949205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284890 ] 00:04:37.943 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.200 [2024-04-24 16:00:39.316108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.200 [2024-04-24 16:00:39.399041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.491 [2024-04-24 16:00:42.427719] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.491 [2024-04-24 16:00:42.460207] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:41.491 16:00:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:41.491 16:00:42 -- common/autotest_common.sh@850 -- # return 0 00:04:41.491 16:00:42 -- json_config/common.sh@26 -- # echo '' 00:04:41.491 00:04:41.491 16:00:42 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:41.491 16:00:42 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:41.491 INFO: Checking if target configuration is the same... 00:04:41.491 16:00:42 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.491 16:00:42 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:41.491 16:00:42 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.491 + '[' 2 -ne 2 ']' 00:04:41.491 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:41.491 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:41.491 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:41.491 +++ basename /dev/fd/62 00:04:41.491 ++ mktemp /tmp/62.XXX 00:04:41.491 + tmp_file_1=/tmp/62.5Sl 00:04:41.491 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.491 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:41.491 + tmp_file_2=/tmp/spdk_tgt_config.json.73n 00:04:41.491 + ret=0 00:04:41.491 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:41.749 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:41.749 + diff -u /tmp/62.5Sl /tmp/spdk_tgt_config.json.73n 00:04:41.749 + echo 'INFO: JSON config files are the same' 00:04:41.749 INFO: JSON config files are the same 00:04:41.749 + rm /tmp/62.5Sl /tmp/spdk_tgt_config.json.73n 00:04:41.749 + exit 0 00:04:41.749 16:00:42 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:41.749 16:00:42 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:41.749 INFO: changing configuration and checking if this can be detected... 
00:04:41.749 16:00:42 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:41.749 16:00:42 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.006 16:00:43 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.006 16:00:43 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:42.006 16:00:43 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.006 + '[' 2 -ne 2 ']' 00:04:42.006 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:42.006 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:42.006 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.006 +++ basename /dev/fd/62 00:04:42.006 ++ mktemp /tmp/62.XXX 00:04:42.006 + tmp_file_1=/tmp/62.1Wt 00:04:42.006 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.006 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.006 + tmp_file_2=/tmp/spdk_tgt_config.json.al2 00:04:42.006 + ret=0 00:04:42.006 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.264 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.523 + diff -u /tmp/62.1Wt /tmp/spdk_tgt_config.json.al2 00:04:42.523 + ret=1 00:04:42.523 + echo '=== Start of file: /tmp/62.1Wt ===' 00:04:42.523 + cat /tmp/62.1Wt 00:04:42.523 + echo '=== End of file: /tmp/62.1Wt ===' 00:04:42.523 + echo '' 00:04:42.523 + echo '=== Start of file: /tmp/spdk_tgt_config.json.al2 ===' 00:04:42.523 + cat /tmp/spdk_tgt_config.json.al2 00:04:42.523 + echo '=== End of file: /tmp/spdk_tgt_config.json.al2 ===' 00:04:42.523 + echo '' 00:04:42.523 + rm /tmp/62.1Wt /tmp/spdk_tgt_config.json.al2 00:04:42.523 + exit 1 00:04:42.523 16:00:43 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:42.523 INFO: configuration change detected. 
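The two json_diff.sh runs above are the whole detection mechanism: save_config output and the reference JSON are both normalized with config_filter.py -method sort and then compared with diff -u, so the deleted marker bdev shows up as a non-empty diff (ret=1). A generic sketch of the same idea, with python3's json module standing in for config_filter.py and hypothetical input file names:

    normalize() {   # sort keys so semantically equal configs compare equal
        python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
    }

    normalize < saved_config.json   > /tmp/a.json
    normalize < current_config.json > /tmp/b.json
    if diff -u /tmp/a.json /tmp/b.json > /dev/null; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi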
00:04:42.523 16:00:43 -- json_config/json_config.sh@394 -- # json_config_test_fini
00:04:42.523 16:00:43 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:04:42.523 16:00:43 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:42.523 16:00:43 -- common/autotest_common.sh@10 -- # set +x
00:04:42.523 16:00:43 -- json_config/json_config.sh@307 -- # local ret=0
00:04:42.523 16:00:43 -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:04:42.523 16:00:43 -- json_config/json_config.sh@317 -- # [[ -n 3284890 ]]
00:04:42.523 16:00:43 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:04:42.523 16:00:43 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:04:42.523 16:00:43 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:42.523 16:00:43 -- common/autotest_common.sh@10 -- # set +x
00:04:42.523 16:00:43 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]]
00:04:42.523 16:00:43 -- json_config/json_config.sh@193 -- # uname -s
00:04:42.523 16:00:43 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:04:42.523 16:00:43 -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:04:42.523 16:00:43 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:04:42.523 16:00:43 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:04:42.523 16:00:43 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:42.523 16:00:43 -- common/autotest_common.sh@10 -- # set +x
00:04:42.523 16:00:43 -- json_config/json_config.sh@323 -- # killprocess 3284890
00:04:42.523 16:00:43 -- common/autotest_common.sh@936 -- # '[' -z 3284890 ']'
00:04:42.523 16:00:43 -- common/autotest_common.sh@940 -- # kill -0 3284890
00:04:42.523 16:00:43 -- common/autotest_common.sh@941 -- # uname
00:04:42.523 16:00:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:42.523 16:00:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3284890
00:04:42.523 16:00:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:42.523 16:00:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:42.523 16:00:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3284890'
00:04:42.523 killing process with pid 3284890
00:04:42.523 16:00:43 -- common/autotest_common.sh@955 -- # kill 3284890
00:04:42.523 16:00:43 -- common/autotest_common.sh@960 -- # wait 3284890
00:04:44.422 16:00:45 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:44.422 16:00:45 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:04:44.422 16:00:45 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:44.422 16:00:45 -- common/autotest_common.sh@10 -- # set +x
00:04:44.422 16:00:45 -- json_config/json_config.sh@328 -- # return 0
00:04:44.422 16:00:45 -- json_config/json_config.sh@396 -- # echo 'INFO: Success'
00:04:44.422 INFO: Success
00:04:44.422
00:04:44.422 real 0m15.730s
00:04:44.422 user 0m17.475s
00:04:44.422 sys 0m1.987s
00:04:44.422 16:00:45 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:44.422 16:00:45 -- common/autotest_common.sh@10 -- # set +x
00:04:44.422 ************************************
00:04:44.422 END TEST json_config
00:04:44.422 ************************************
00:04:44.423 16:00:45 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:44.423 16:00:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:44.423 16:00:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:44.423 16:00:45 -- common/autotest_common.sh@10 -- # set +x
00:04:44.423 ************************************
00:04:44.423 START TEST json_config_extra_key
00:04:44.423 ************************************
00:04:44.423 16:00:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:44.423 16:00:45 -- nvmf/common.sh@7 -- # uname -s
00:04:44.423 16:00:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:44.423 16:00:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:44.423 16:00:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:44.423 16:00:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:44.423 16:00:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:44.423 16:00:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:44.423 16:00:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:44.423 16:00:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:44.423 16:00:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:44.423 16:00:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:44.423 16:00:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:04:44.423 16:00:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:04:44.423 16:00:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:44.423 16:00:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:44.423 16:00:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:44.423 16:00:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:44.423 16:00:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:44.423 16:00:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:44.423 16:00:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:44.423 16:00:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:44.423 16:00:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:44.423 16:00:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:44.423 16:00:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:44.423 16:00:45 -- paths/export.sh@5 -- # export PATH
00:04:44.423 16:00:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:44.423 16:00:45 -- nvmf/common.sh@47 -- # : 0
00:04:44.423 16:00:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:04:44.423 16:00:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:04:44.423 16:00:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:44.423 16:00:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:44.423 16:00:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:44.423 16:00:45 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:04:44.423 16:00:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:04:44.423 16:00:45 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:04:44.423 INFO: launching applications...
00:04:44.423 16:00:45 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:44.423 16:00:45 -- json_config/common.sh@9 -- # local app=target 00:04:44.423 16:00:45 -- json_config/common.sh@10 -- # shift 00:04:44.423 16:00:45 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.423 16:00:45 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.423 16:00:45 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.423 16:00:45 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.423 16:00:45 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.423 16:00:45 -- json_config/common.sh@22 -- # app_pid["$app"]=3285801 00:04:44.423 16:00:45 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:44.423 16:00:45 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.423 Waiting for target to run... 00:04:44.423 16:00:45 -- json_config/common.sh@25 -- # waitforlisten 3285801 /var/tmp/spdk_tgt.sock 00:04:44.423 16:00:45 -- common/autotest_common.sh@817 -- # '[' -z 3285801 ']' 00:04:44.423 16:00:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.423 16:00:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:44.423 16:00:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.423 16:00:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:44.423 16:00:45 -- common/autotest_common.sh@10 -- # set +x 00:04:44.423 [2024-04-24 16:00:45.475624] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:04:44.423 [2024-04-24 16:00:45.475718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285801 ] 00:04:44.423 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.990 [2024-04-24 16:00:45.987369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.990 [2024-04-24 16:00:46.093181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.249 16:00:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:45.249 16:00:46 -- common/autotest_common.sh@850 -- # return 0 00:04:45.249 16:00:46 -- json_config/common.sh@26 -- # echo '' 00:04:45.249 00:04:45.249 16:00:46 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:45.249 INFO: shutting down applications... 
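The shutdown sequence that follows is a polite-then-verify loop: SIGINT first, then up to 30 half-second kill -0 polls before declaring the target gone. A minimal re-creation of the loop driving those trace lines; the force-kill fallback on timeout is an assumption, not something this trace shows:

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"                # ask the target to exit cleanly
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5                      # still alive; poll again
        done
        kill -9 "$pid"                     # assumption: force-kill if it never exits
        return 1
    }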
00:04:45.249 16:00:46 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.249 16:00:46 -- json_config/common.sh@31 -- # local app=target 00:04:45.249 16:00:46 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.249 16:00:46 -- json_config/common.sh@35 -- # [[ -n 3285801 ]] 00:04:45.249 16:00:46 -- json_config/common.sh@38 -- # kill -SIGINT 3285801 00:04:45.249 16:00:46 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.249 16:00:46 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.249 16:00:46 -- json_config/common.sh@41 -- # kill -0 3285801 00:04:45.249 16:00:46 -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.814 16:00:46 -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.814 16:00:46 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.814 16:00:46 -- json_config/common.sh@41 -- # kill -0 3285801 00:04:45.814 16:00:46 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:45.814 16:00:46 -- json_config/common.sh@43 -- # break 00:04:45.814 16:00:46 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:45.814 16:00:46 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:45.814 SPDK target shutdown done 00:04:45.814 16:00:46 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:45.814 Success 00:04:45.814 00:04:45.814 real 0m1.530s 00:04:45.814 user 0m1.372s 00:04:45.814 sys 0m0.582s 00:04:45.814 16:00:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.814 16:00:46 -- common/autotest_common.sh@10 -- # set +x 00:04:45.814 ************************************ 00:04:45.814 END TEST json_config_extra_key 00:04:45.814 ************************************ 00:04:45.814 16:00:46 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.814 16:00:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.814 16:00:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.814 16:00:46 -- common/autotest_common.sh@10 -- # set +x 00:04:45.814 ************************************ 00:04:45.814 START TEST alias_rpc 00:04:45.814 ************************************ 00:04:45.814 16:00:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.814 * Looking for test storage... 00:04:45.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:45.814 16:00:47 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:45.814 16:00:47 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3286109 00:04:45.814 16:00:47 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.814 16:00:47 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3286109 00:04:45.814 16:00:47 -- common/autotest_common.sh@817 -- # '[' -z 3286109 ']' 00:04:45.814 16:00:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.814 16:00:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:45.814 16:00:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:45.814 16:00:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:45.814 16:00:47 -- common/autotest_common.sh@10 -- # set +x 00:04:46.073 [2024-04-24 16:00:47.132292] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:04:46.073 [2024-04-24 16:00:47.132365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286109 ] 00:04:46.073 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.073 [2024-04-24 16:00:47.192595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.073 [2024-04-24 16:00:47.303893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.332 16:00:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:46.332 16:00:47 -- common/autotest_common.sh@850 -- # return 0 00:04:46.332 16:00:47 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:46.589 16:00:47 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3286109 00:04:46.589 16:00:47 -- common/autotest_common.sh@936 -- # '[' -z 3286109 ']' 00:04:46.589 16:00:47 -- common/autotest_common.sh@940 -- # kill -0 3286109 00:04:46.589 16:00:47 -- common/autotest_common.sh@941 -- # uname 00:04:46.589 16:00:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:46.589 16:00:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3286109 00:04:46.589 16:00:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:46.589 16:00:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:46.589 16:00:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3286109' 00:04:46.589 killing process with pid 3286109 00:04:46.589 16:00:47 -- common/autotest_common.sh@955 -- # kill 3286109 00:04:46.589 16:00:47 -- common/autotest_common.sh@960 -- # wait 3286109 00:04:47.155 00:04:47.155 real 0m1.284s 00:04:47.155 user 0m1.361s 00:04:47.155 sys 0m0.441s 00:04:47.155 16:00:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.155 16:00:48 -- common/autotest_common.sh@10 -- # set +x 00:04:47.155 ************************************ 00:04:47.155 END TEST alias_rpc 00:04:47.155 ************************************ 00:04:47.155 16:00:48 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:47.155 16:00:48 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:47.155 16:00:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.155 16:00:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.155 16:00:48 -- common/autotest_common.sh@10 -- # set +x 00:04:47.444 ************************************ 00:04:47.444 START TEST spdkcli_tcp 00:04:47.444 ************************************ 00:04:47.444 16:00:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:47.444 * Looking for test storage... 
00:04:47.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:47.444 16:00:48 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:47.444 16:00:48 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:47.444 16:00:48 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:47.444 16:00:48 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:47.444 16:00:48 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:47.444 16:00:48 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:47.444 16:00:48 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:47.444 16:00:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:47.444 16:00:48 -- common/autotest_common.sh@10 -- # set +x 00:04:47.444 16:00:48 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3286309 00:04:47.444 16:00:48 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:47.444 16:00:48 -- spdkcli/tcp.sh@27 -- # waitforlisten 3286309 00:04:47.444 16:00:48 -- common/autotest_common.sh@817 -- # '[' -z 3286309 ']' 00:04:47.444 16:00:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.444 16:00:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:47.444 16:00:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.444 16:00:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:47.444 16:00:48 -- common/autotest_common.sh@10 -- # set +x 00:04:47.444 [2024-04-24 16:00:48.546978] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:04:47.444 [2024-04-24 16:00:48.547084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286309 ] 00:04:47.444 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.444 [2024-04-24 16:00:48.603364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.444 [2024-04-24 16:00:48.703596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.444 [2024-04-24 16:00:48.703600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.718 16:00:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:47.718 16:00:48 -- common/autotest_common.sh@850 -- # return 0 00:04:47.718 16:00:48 -- spdkcli/tcp.sh@31 -- # socat_pid=3286323 00:04:47.718 16:00:48 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:47.718 16:00:48 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.016 [ 00:04:48.016 "bdev_malloc_delete", 00:04:48.016 "bdev_malloc_create", 00:04:48.016 "bdev_null_resize", 00:04:48.016 "bdev_null_delete", 00:04:48.016 "bdev_null_create", 00:04:48.016 "bdev_nvme_cuse_unregister", 00:04:48.016 "bdev_nvme_cuse_register", 00:04:48.016 "bdev_opal_new_user", 00:04:48.016 "bdev_opal_set_lock_state", 00:04:48.016 "bdev_opal_delete", 00:04:48.016 "bdev_opal_get_info", 00:04:48.016 "bdev_opal_create", 00:04:48.016 "bdev_nvme_opal_revert", 00:04:48.016 "bdev_nvme_opal_init", 00:04:48.016 "bdev_nvme_send_cmd", 00:04:48.016 "bdev_nvme_get_path_iostat", 00:04:48.016 "bdev_nvme_get_mdns_discovery_info", 00:04:48.016 "bdev_nvme_stop_mdns_discovery", 00:04:48.016 "bdev_nvme_start_mdns_discovery", 00:04:48.016 "bdev_nvme_set_multipath_policy", 00:04:48.016 "bdev_nvme_set_preferred_path", 00:04:48.016 "bdev_nvme_get_io_paths", 00:04:48.016 "bdev_nvme_remove_error_injection", 00:04:48.016 "bdev_nvme_add_error_injection", 00:04:48.016 "bdev_nvme_get_discovery_info", 00:04:48.016 "bdev_nvme_stop_discovery", 00:04:48.016 "bdev_nvme_start_discovery", 00:04:48.016 "bdev_nvme_get_controller_health_info", 00:04:48.016 "bdev_nvme_disable_controller", 00:04:48.016 "bdev_nvme_enable_controller", 00:04:48.016 "bdev_nvme_reset_controller", 00:04:48.016 "bdev_nvme_get_transport_statistics", 00:04:48.016 "bdev_nvme_apply_firmware", 00:04:48.016 "bdev_nvme_detach_controller", 00:04:48.016 "bdev_nvme_get_controllers", 00:04:48.016 "bdev_nvme_attach_controller", 00:04:48.016 "bdev_nvme_set_hotplug", 00:04:48.016 "bdev_nvme_set_options", 00:04:48.016 "bdev_passthru_delete", 00:04:48.016 "bdev_passthru_create", 00:04:48.016 "bdev_lvol_grow_lvstore", 00:04:48.016 "bdev_lvol_get_lvols", 00:04:48.016 "bdev_lvol_get_lvstores", 00:04:48.016 "bdev_lvol_delete", 00:04:48.016 "bdev_lvol_set_read_only", 00:04:48.016 "bdev_lvol_resize", 00:04:48.016 "bdev_lvol_decouple_parent", 00:04:48.016 "bdev_lvol_inflate", 00:04:48.016 "bdev_lvol_rename", 00:04:48.016 "bdev_lvol_clone_bdev", 00:04:48.016 "bdev_lvol_clone", 00:04:48.016 "bdev_lvol_snapshot", 00:04:48.016 "bdev_lvol_create", 00:04:48.016 "bdev_lvol_delete_lvstore", 00:04:48.016 "bdev_lvol_rename_lvstore", 00:04:48.016 "bdev_lvol_create_lvstore", 00:04:48.016 "bdev_raid_set_options", 00:04:48.016 "bdev_raid_remove_base_bdev", 00:04:48.016 "bdev_raid_add_base_bdev", 00:04:48.016 "bdev_raid_delete", 00:04:48.016 "bdev_raid_create", 
00:04:48.016 "bdev_raid_get_bdevs", 00:04:48.016 "bdev_error_inject_error", 00:04:48.016 "bdev_error_delete", 00:04:48.016 "bdev_error_create", 00:04:48.016 "bdev_split_delete", 00:04:48.016 "bdev_split_create", 00:04:48.016 "bdev_delay_delete", 00:04:48.016 "bdev_delay_create", 00:04:48.016 "bdev_delay_update_latency", 00:04:48.016 "bdev_zone_block_delete", 00:04:48.016 "bdev_zone_block_create", 00:04:48.016 "blobfs_create", 00:04:48.016 "blobfs_detect", 00:04:48.016 "blobfs_set_cache_size", 00:04:48.016 "bdev_aio_delete", 00:04:48.016 "bdev_aio_rescan", 00:04:48.016 "bdev_aio_create", 00:04:48.016 "bdev_ftl_set_property", 00:04:48.016 "bdev_ftl_get_properties", 00:04:48.017 "bdev_ftl_get_stats", 00:04:48.017 "bdev_ftl_unmap", 00:04:48.017 "bdev_ftl_unload", 00:04:48.017 "bdev_ftl_delete", 00:04:48.017 "bdev_ftl_load", 00:04:48.017 "bdev_ftl_create", 00:04:48.017 "bdev_virtio_attach_controller", 00:04:48.017 "bdev_virtio_scsi_get_devices", 00:04:48.017 "bdev_virtio_detach_controller", 00:04:48.017 "bdev_virtio_blk_set_hotplug", 00:04:48.017 "bdev_iscsi_delete", 00:04:48.017 "bdev_iscsi_create", 00:04:48.017 "bdev_iscsi_set_options", 00:04:48.017 "accel_error_inject_error", 00:04:48.017 "ioat_scan_accel_module", 00:04:48.017 "dsa_scan_accel_module", 00:04:48.017 "iaa_scan_accel_module", 00:04:48.017 "vfu_virtio_create_scsi_endpoint", 00:04:48.017 "vfu_virtio_scsi_remove_target", 00:04:48.017 "vfu_virtio_scsi_add_target", 00:04:48.017 "vfu_virtio_create_blk_endpoint", 00:04:48.017 "vfu_virtio_delete_endpoint", 00:04:48.017 "keyring_file_remove_key", 00:04:48.017 "keyring_file_add_key", 00:04:48.017 "iscsi_get_histogram", 00:04:48.017 "iscsi_enable_histogram", 00:04:48.017 "iscsi_set_options", 00:04:48.017 "iscsi_get_auth_groups", 00:04:48.017 "iscsi_auth_group_remove_secret", 00:04:48.017 "iscsi_auth_group_add_secret", 00:04:48.017 "iscsi_delete_auth_group", 00:04:48.017 "iscsi_create_auth_group", 00:04:48.017 "iscsi_set_discovery_auth", 00:04:48.017 "iscsi_get_options", 00:04:48.017 "iscsi_target_node_request_logout", 00:04:48.017 "iscsi_target_node_set_redirect", 00:04:48.017 "iscsi_target_node_set_auth", 00:04:48.017 "iscsi_target_node_add_lun", 00:04:48.017 "iscsi_get_stats", 00:04:48.017 "iscsi_get_connections", 00:04:48.017 "iscsi_portal_group_set_auth", 00:04:48.017 "iscsi_start_portal_group", 00:04:48.017 "iscsi_delete_portal_group", 00:04:48.017 "iscsi_create_portal_group", 00:04:48.017 "iscsi_get_portal_groups", 00:04:48.017 "iscsi_delete_target_node", 00:04:48.017 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.017 "iscsi_target_node_add_pg_ig_maps", 00:04:48.017 "iscsi_create_target_node", 00:04:48.017 "iscsi_get_target_nodes", 00:04:48.017 "iscsi_delete_initiator_group", 00:04:48.017 "iscsi_initiator_group_remove_initiators", 00:04:48.017 "iscsi_initiator_group_add_initiators", 00:04:48.017 "iscsi_create_initiator_group", 00:04:48.017 "iscsi_get_initiator_groups", 00:04:48.017 "nvmf_set_crdt", 00:04:48.017 "nvmf_set_config", 00:04:48.017 "nvmf_set_max_subsystems", 00:04:48.017 "nvmf_subsystem_get_listeners", 00:04:48.017 "nvmf_subsystem_get_qpairs", 00:04:48.017 "nvmf_subsystem_get_controllers", 00:04:48.017 "nvmf_get_stats", 00:04:48.017 "nvmf_get_transports", 00:04:48.017 "nvmf_create_transport", 00:04:48.017 "nvmf_get_targets", 00:04:48.017 "nvmf_delete_target", 00:04:48.017 "nvmf_create_target", 00:04:48.017 "nvmf_subsystem_allow_any_host", 00:04:48.017 "nvmf_subsystem_remove_host", 00:04:48.017 "nvmf_subsystem_add_host", 00:04:48.017 "nvmf_ns_remove_host", 00:04:48.017 
"nvmf_ns_add_host", 00:04:48.017 "nvmf_subsystem_remove_ns", 00:04:48.017 "nvmf_subsystem_add_ns", 00:04:48.017 "nvmf_subsystem_listener_set_ana_state", 00:04:48.017 "nvmf_discovery_get_referrals", 00:04:48.017 "nvmf_discovery_remove_referral", 00:04:48.017 "nvmf_discovery_add_referral", 00:04:48.017 "nvmf_subsystem_remove_listener", 00:04:48.017 "nvmf_subsystem_add_listener", 00:04:48.017 "nvmf_delete_subsystem", 00:04:48.017 "nvmf_create_subsystem", 00:04:48.017 "nvmf_get_subsystems", 00:04:48.017 "env_dpdk_get_mem_stats", 00:04:48.017 "nbd_get_disks", 00:04:48.017 "nbd_stop_disk", 00:04:48.017 "nbd_start_disk", 00:04:48.017 "ublk_recover_disk", 00:04:48.017 "ublk_get_disks", 00:04:48.017 "ublk_stop_disk", 00:04:48.017 "ublk_start_disk", 00:04:48.017 "ublk_destroy_target", 00:04:48.017 "ublk_create_target", 00:04:48.017 "virtio_blk_create_transport", 00:04:48.017 "virtio_blk_get_transports", 00:04:48.017 "vhost_controller_set_coalescing", 00:04:48.017 "vhost_get_controllers", 00:04:48.017 "vhost_delete_controller", 00:04:48.017 "vhost_create_blk_controller", 00:04:48.017 "vhost_scsi_controller_remove_target", 00:04:48.017 "vhost_scsi_controller_add_target", 00:04:48.017 "vhost_start_scsi_controller", 00:04:48.017 "vhost_create_scsi_controller", 00:04:48.017 "thread_set_cpumask", 00:04:48.017 "framework_get_scheduler", 00:04:48.017 "framework_set_scheduler", 00:04:48.017 "framework_get_reactors", 00:04:48.017 "thread_get_io_channels", 00:04:48.017 "thread_get_pollers", 00:04:48.017 "thread_get_stats", 00:04:48.017 "framework_monitor_context_switch", 00:04:48.017 "spdk_kill_instance", 00:04:48.017 "log_enable_timestamps", 00:04:48.017 "log_get_flags", 00:04:48.017 "log_clear_flag", 00:04:48.017 "log_set_flag", 00:04:48.017 "log_get_level", 00:04:48.017 "log_set_level", 00:04:48.017 "log_get_print_level", 00:04:48.017 "log_set_print_level", 00:04:48.017 "framework_enable_cpumask_locks", 00:04:48.017 "framework_disable_cpumask_locks", 00:04:48.017 "framework_wait_init", 00:04:48.017 "framework_start_init", 00:04:48.017 "scsi_get_devices", 00:04:48.017 "bdev_get_histogram", 00:04:48.017 "bdev_enable_histogram", 00:04:48.017 "bdev_set_qos_limit", 00:04:48.017 "bdev_set_qd_sampling_period", 00:04:48.017 "bdev_get_bdevs", 00:04:48.017 "bdev_reset_iostat", 00:04:48.017 "bdev_get_iostat", 00:04:48.017 "bdev_examine", 00:04:48.017 "bdev_wait_for_examine", 00:04:48.017 "bdev_set_options", 00:04:48.017 "notify_get_notifications", 00:04:48.017 "notify_get_types", 00:04:48.017 "accel_get_stats", 00:04:48.017 "accel_set_options", 00:04:48.017 "accel_set_driver", 00:04:48.017 "accel_crypto_key_destroy", 00:04:48.017 "accel_crypto_keys_get", 00:04:48.017 "accel_crypto_key_create", 00:04:48.017 "accel_assign_opc", 00:04:48.017 "accel_get_module_info", 00:04:48.017 "accel_get_opc_assignments", 00:04:48.017 "vmd_rescan", 00:04:48.017 "vmd_remove_device", 00:04:48.017 "vmd_enable", 00:04:48.017 "sock_set_default_impl", 00:04:48.017 "sock_impl_set_options", 00:04:48.017 "sock_impl_get_options", 00:04:48.017 "iobuf_get_stats", 00:04:48.017 "iobuf_set_options", 00:04:48.017 "keyring_get_keys", 00:04:48.017 "framework_get_pci_devices", 00:04:48.017 "framework_get_config", 00:04:48.017 "framework_get_subsystems", 00:04:48.017 "vfu_tgt_set_base_path", 00:04:48.017 "trace_get_info", 00:04:48.017 "trace_get_tpoint_group_mask", 00:04:48.017 "trace_disable_tpoint_group", 00:04:48.017 "trace_enable_tpoint_group", 00:04:48.017 "trace_clear_tpoint_mask", 00:04:48.017 "trace_set_tpoint_mask", 00:04:48.017 
"spdk_get_version", 00:04:48.017 "rpc_get_methods" 00:04:48.017 ] 00:04:48.017 16:00:49 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.017 16:00:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:48.017 16:00:49 -- common/autotest_common.sh@10 -- # set +x 00:04:48.017 16:00:49 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.017 16:00:49 -- spdkcli/tcp.sh@38 -- # killprocess 3286309 00:04:48.017 16:00:49 -- common/autotest_common.sh@936 -- # '[' -z 3286309 ']' 00:04:48.017 16:00:49 -- common/autotest_common.sh@940 -- # kill -0 3286309 00:04:48.017 16:00:49 -- common/autotest_common.sh@941 -- # uname 00:04:48.017 16:00:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:48.017 16:00:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3286309 00:04:48.017 16:00:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:48.017 16:00:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:48.017 16:00:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3286309' 00:04:48.017 killing process with pid 3286309 00:04:48.017 16:00:49 -- common/autotest_common.sh@955 -- # kill 3286309 00:04:48.017 16:00:49 -- common/autotest_common.sh@960 -- # wait 3286309 00:04:48.605 00:04:48.605 real 0m1.285s 00:04:48.605 user 0m2.246s 00:04:48.605 sys 0m0.444s 00:04:48.605 16:00:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:48.605 16:00:49 -- common/autotest_common.sh@10 -- # set +x 00:04:48.605 ************************************ 00:04:48.605 END TEST spdkcli_tcp 00:04:48.605 ************************************ 00:04:48.605 16:00:49 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:48.605 16:00:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.605 16:00:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.605 16:00:49 -- common/autotest_common.sh@10 -- # set +x 00:04:48.605 ************************************ 00:04:48.605 START TEST dpdk_mem_utility 00:04:48.605 ************************************ 00:04:48.605 16:00:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:48.605 * Looking for test storage... 00:04:48.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:48.925 16:00:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:48.925 16:00:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3286529 00:04:48.925 16:00:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.925 16:00:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3286529 00:04:48.925 16:00:49 -- common/autotest_common.sh@817 -- # '[' -z 3286529 ']' 00:04:48.925 16:00:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.925 16:00:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:48.925 16:00:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:48.925 16:00:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:48.925 16:00:49 -- common/autotest_common.sh@10 -- # set +x 00:04:48.925 [2024-04-24 16:00:49.947244] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:04:48.925 [2024-04-24 16:00:49.947341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286529 ] 00:04:48.925 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.925 [2024-04-24 16:00:50.013846] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.925 [2024-04-24 16:00:50.129355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.857 16:00:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:49.857 16:00:50 -- common/autotest_common.sh@850 -- # return 0 00:04:49.857 16:00:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:49.857 16:00:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:49.857 16:00:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.857 16:00:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.857 { 00:04:49.857 "filename": "/tmp/spdk_mem_dump.txt" 00:04:49.857 } 00:04:49.857 16:00:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.857 16:00:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.857 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:49.857 1 heaps totaling size 814.000000 MiB 00:04:49.857 size: 814.000000 MiB heap id: 0 00:04:49.857 end heaps---------- 00:04:49.857 8 mempools totaling size 598.116089 MiB 00:04:49.857 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:49.857 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:49.857 size: 84.521057 MiB name: bdev_io_3286529 00:04:49.857 size: 51.011292 MiB name: evtpool_3286529 00:04:49.857 size: 50.003479 MiB name: msgpool_3286529 00:04:49.857 size: 21.763794 MiB name: PDU_Pool 00:04:49.857 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:49.857 size: 0.026123 MiB name: Session_Pool 00:04:49.857 end mempools------- 00:04:49.857 6 memzones totaling size 4.142822 MiB 00:04:49.857 size: 1.000366 MiB name: RG_ring_0_3286529 00:04:49.857 size: 1.000366 MiB name: RG_ring_1_3286529 00:04:49.857 size: 1.000366 MiB name: RG_ring_4_3286529 00:04:49.857 size: 1.000366 MiB name: RG_ring_5_3286529 00:04:49.857 size: 0.125366 MiB name: RG_ring_2_3286529 00:04:49.857 size: 0.015991 MiB name: RG_ring_3_3286529 00:04:49.857 end memzones------- 00:04:49.857 16:00:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:49.857 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:49.857 list of free elements. 
size: 12.519348 MiB 00:04:49.857 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:49.857 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:49.857 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:49.857 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:49.857 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:49.857 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:49.857 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:49.857 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:49.857 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:49.857 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:49.857 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:49.857 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:49.857 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:49.857 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:49.857 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:49.857 list of standard malloc elements. size: 199.218079 MiB 00:04:49.857 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:49.857 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:49.857 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:49.857 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:49.857 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:49.857 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:49.857 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:49.857 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:49.857 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:49.857 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:49.857 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:49.857 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:49.857 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:49.857 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:49.857 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:49.857 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:49.857 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:49.857 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:49.857 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:49.857 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:49.857 list of memzone associated elements. size: 602.262573 MiB 00:04:49.857 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:49.857 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:49.857 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:49.857 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:49.857 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:49.857 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3286529_0 00:04:49.857 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:49.857 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3286529_0 00:04:49.857 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:49.857 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3286529_0 00:04:49.857 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:49.857 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:49.857 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:49.857 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:49.857 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:49.857 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3286529 00:04:49.857 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:49.857 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3286529 00:04:49.857 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:49.857 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3286529 00:04:49.857 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:49.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:49.857 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:49.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:49.857 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:49.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:49.857 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:49.857 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:49.857 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:49.857 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3286529 00:04:49.857 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:49.857 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3286529 00:04:49.857 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:49.857 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3286529 00:04:49.857 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:49.857 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3286529 00:04:49.857 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:49.857 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3286529 00:04:49.857 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:49.857 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:49.857 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:49.857 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:49.857 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:49.857 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:49.857 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:49.857 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3286529 00:04:49.858 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:49.858 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:49.858 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:49.858 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:49.858 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:49.858 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3286529 00:04:49.858 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:49.858 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:49.858 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:49.858 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3286529 00:04:49.858 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:49.858 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3286529 00:04:49.858 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:49.858 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:49.858 16:00:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:49.858 16:00:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3286529 00:04:49.858 16:00:51 -- common/autotest_common.sh@936 -- # '[' -z 3286529 ']' 00:04:49.858 16:00:51 -- common/autotest_common.sh@940 -- # kill -0 3286529 00:04:49.858 16:00:51 -- common/autotest_common.sh@941 -- # uname 00:04:49.858 16:00:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:49.858 16:00:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3286529 00:04:49.858 16:00:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:49.858 16:00:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:49.858 16:00:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3286529' 00:04:49.858 killing process with pid 3286529 00:04:49.858 16:00:51 -- common/autotest_common.sh@955 -- # kill 3286529 00:04:49.858 16:00:51 -- common/autotest_common.sh@960 -- # wait 3286529 00:04:50.422 00:04:50.422 real 0m1.656s 00:04:50.422 user 0m1.838s 00:04:50.422 sys 0m0.448s 00:04:50.422 16:00:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.422 16:00:51 -- common/autotest_common.sh@10 -- # set +x 00:04:50.422 ************************************ 00:04:50.422 END TEST dpdk_mem_utility 00:04:50.422 ************************************ 00:04:50.422 16:00:51 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:50.422 16:00:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.422 16:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.422 16:00:51 -- common/autotest_common.sh@10 -- # set +x 
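
Note: the heap/mempool/memzone report above is built in two steps: the env_dpdk_get_mem_stats RPC makes the target dump its DPDK allocator state to /tmp/spdk_mem_dump.txt (the filename in the JSON reply), and scripts/dpdk_mem_info.py renders that file, totals by default and a per-element view of one heap with -m. The same two steps against a running target, assuming $SPDK_DIR is an spdk checkout:

    "$SPDK_DIR"/scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    "$SPDK_DIR"/scripts/dpdk_mem_info.py                 # summary: heaps, mempools, memzones
    "$SPDK_DIR"/scripts/dpdk_mem_info.py -m 0            # element-level dump of heap 0
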
00:04:50.422 ************************************ 00:04:50.422 START TEST event 00:04:50.422 ************************************ 00:04:50.422 16:00:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:50.422 * Looking for test storage... 00:04:50.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:50.422 16:00:51 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:50.423 16:00:51 -- bdev/nbd_common.sh@6 -- # set -e 00:04:50.423 16:00:51 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.423 16:00:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:50.423 16:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.423 16:00:51 -- common/autotest_common.sh@10 -- # set +x 00:04:50.680 ************************************ 00:04:50.680 START TEST event_perf 00:04:50.680 ************************************ 00:04:50.680 16:00:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.680 Running I/O for 1 seconds...[2024-04-24 16:00:51.787835] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:04:50.680 [2024-04-24 16:00:51.787899] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286856 ] 00:04:50.680 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.680 [2024-04-24 16:00:51.846042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:50.680 [2024-04-24 16:00:51.958210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.680 [2024-04-24 16:00:51.958265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.681 [2024-04-24 16:00:51.958380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.681 [2024-04-24 16:00:51.958383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.051 Running I/O for 1 seconds... 00:04:52.051 lcore 0: 233322 00:04:52.051 lcore 1: 233322 00:04:52.051 lcore 2: 233322 00:04:52.051 lcore 3: 233323 00:04:52.051 done. 
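
Note: in the event_perf invocation above, -m 0xF is a hex core mask (bits 0-3 set, so four lcores) and -t 1 caps the run at one second; the near-equal per-lcore counts are the load-balancing result under test, and the timing summary follows below. Decoding such a mask is plain bit arithmetic, e.g.:

    mask=0xF
    for cpu in {0..7}; do
        # print every lcore whose bit is set in the mask
        (( (mask >> cpu) & 1 )) && echo "lcore $cpu selected"
    done
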
00:04:52.051 00:04:52.051 real 0m1.306s 00:04:52.051 user 0m4.215s 00:04:52.051 sys 0m0.086s 00:04:52.051 16:00:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.051 16:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.051 ************************************ 00:04:52.051 END TEST event_perf 00:04:52.051 ************************************ 00:04:52.051 16:00:53 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:52.051 16:00:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:52.051 16:00:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.051 16:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.051 ************************************ 00:04:52.051 START TEST event_reactor 00:04:52.051 ************************************ 00:04:52.051 16:00:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:52.051 [2024-04-24 16:00:53.220913] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:04:52.051 [2024-04-24 16:00:53.220974] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287023 ] 00:04:52.051 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.051 [2024-04-24 16:00:53.281586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.309 [2024-04-24 16:00:53.391399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.241 test_start 00:04:53.241 oneshot 00:04:53.241 tick 100 00:04:53.241 tick 100 00:04:53.241 tick 250 00:04:53.241 tick 100 00:04:53.241 tick 100 00:04:53.241 tick 100 00:04:53.241 tick 250 00:04:53.241 tick 500 00:04:53.241 tick 100 00:04:53.241 tick 100 00:04:53.241 tick 250 00:04:53.241 tick 100 00:04:53.241 tick 100 00:04:53.241 test_end 00:04:53.241 00:04:53.241 real 0m1.297s 00:04:53.241 user 0m1.214s 00:04:53.241 sys 0m0.079s 00:04:53.241 16:00:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:53.241 16:00:54 -- common/autotest_common.sh@10 -- # set +x 00:04:53.241 ************************************ 00:04:53.241 END TEST event_reactor 00:04:53.241 ************************************ 00:04:53.241 16:00:54 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.241 16:00:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:53.241 16:00:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.241 16:00:54 -- common/autotest_common.sh@10 -- # set +x 00:04:53.499 ************************************ 00:04:53.499 START TEST event_reactor_perf 00:04:53.499 ************************************ 00:04:53.499 16:00:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.499 [2024-04-24 16:00:54.636831] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:04:53.499 [2024-04-24 16:00:54.636896] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287194 ] 00:04:53.499 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.499 [2024-04-24 16:00:54.697956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.756 [2024-04-24 16:00:54.812572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.689 test_start 00:04:54.689 test_end 00:04:54.689 Performance: 351160 events per second 00:04:54.689 00:04:54.689 real 0m1.304s 00:04:54.689 user 0m1.218s 00:04:54.689 sys 0m0.081s 00:04:54.689 16:00:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.689 16:00:55 -- common/autotest_common.sh@10 -- # set +x 00:04:54.689 ************************************ 00:04:54.689 END TEST event_reactor_perf 00:04:54.689 ************************************ 00:04:54.689 16:00:55 -- event/event.sh@49 -- # uname -s 00:04:54.689 16:00:55 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:54.689 16:00:55 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.689 16:00:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.689 16:00:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.689 16:00:55 -- common/autotest_common.sh@10 -- # set +x 00:04:54.947 ************************************ 00:04:54.948 START TEST event_scheduler 00:04:54.948 ************************************ 00:04:54.948 16:00:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.948 * Looking for test storage... 00:04:54.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:54.948 16:00:56 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:54.948 16:00:56 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3287498 00:04:54.948 16:00:56 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:54.948 16:00:56 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.948 16:00:56 -- scheduler/scheduler.sh@37 -- # waitforlisten 3287498 00:04:54.948 16:00:56 -- common/autotest_common.sh@817 -- # '[' -z 3287498 ']' 00:04:54.948 16:00:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.948 16:00:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.948 16:00:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.948 16:00:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.948 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:54.948 [2024-04-24 16:00:56.146824] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:04:54.948 [2024-04-24 16:00:56.146915] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287498 ] 00:04:54.948 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.948 [2024-04-24 16:00:56.204429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.206 [2024-04-24 16:00:56.307706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.206 [2024-04-24 16:00:56.307770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.206 [2024-04-24 16:00:56.307837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.206 [2024-04-24 16:00:56.307841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.206 16:00:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:55.206 16:00:56 -- common/autotest_common.sh@850 -- # return 0 00:04:55.206 16:00:56 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:55.206 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.206 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.206 POWER: Env isn't set yet! 00:04:55.206 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:55.206 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:55.206 POWER: Cannot get available frequencies of lcore 0 00:04:55.206 POWER: Attempting to initialise PSTAT power management... 00:04:55.206 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:55.206 POWER: Initialized successfully for lcore 0 power management 00:04:55.206 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:55.206 POWER: Initialized successfully for lcore 1 power management 00:04:55.206 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:55.206 POWER: Initialized successfully for lcore 2 power management 00:04:55.206 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:55.206 POWER: Initialized successfully for lcore 3 power management 00:04:55.206 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.206 16:00:56 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:55.206 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.206 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.206 [2024-04-24 16:00:56.483148] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
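
Note: the POWER: lines above show DPDK's power library probing this host: the ACPI cpufreq path fails (no scaling_available_frequencies in sysfs), it falls back to the pstate driver, and it pins each test lcore's governor to 'performance' for the duration, restoring the previous governors at teardown (visible after the test stops, further below). That state lives in sysfs and can be inspected directly on a Linux host with cpufreq support:

    for gov in /sys/devices/system/cpu/cpu[0-3]/cpufreq/scaling_governor; do
        cpu=$(basename "$(dirname "$(dirname "$gov")")")
        echo "$cpu: $(cat "$gov")"    # e.g. "cpu0: performance" while the test runs
    done
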
00:04:55.206 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.206 16:00:56 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:55.206 16:00:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.206 16:00:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.206 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 ************************************ 00:04:55.464 START TEST scheduler_create_thread 00:04:55.464 ************************************ 00:04:55.464 16:00:56 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 2 00:04:55.464 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 3 00:04:55.464 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 4 00:04:55.464 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 5 00:04:55.464 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 6 00:04:55.464 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 7 00:04:55.464 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 8 00:04:55.464 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 9 00:04:55.464 
16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 10 00:04:55.464 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.464 16:00:56 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:55.464 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.464 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.464 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.465 16:00:56 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:55.465 16:00:56 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:55.465 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.465 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.465 16:00:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.465 16:00:56 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:55.465 16:00:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.465 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:56.030 16:00:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.030 16:00:57 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:56.030 16:00:57 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:56.030 16:00:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.030 16:00:57 -- common/autotest_common.sh@10 -- # set +x 00:04:57.404 16:00:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.404 00:04:57.404 real 0m1.753s 00:04:57.404 user 0m0.009s 00:04:57.404 sys 0m0.004s 00:04:57.404 16:00:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.404 16:00:58 -- common/autotest_common.sh@10 -- # set +x 00:04:57.404 ************************************ 00:04:57.404 END TEST scheduler_create_thread 00:04:57.404 ************************************ 00:04:57.404 16:00:58 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.404 16:00:58 -- scheduler/scheduler.sh@46 -- # killprocess 3287498 00:04:57.405 16:00:58 -- common/autotest_common.sh@936 -- # '[' -z 3287498 ']' 00:04:57.405 16:00:58 -- common/autotest_common.sh@940 -- # kill -0 3287498 00:04:57.405 16:00:58 -- common/autotest_common.sh@941 -- # uname 00:04:57.405 16:00:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.405 16:00:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3287498 00:04:57.405 16:00:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:57.405 16:00:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:57.405 16:00:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3287498' 00:04:57.405 killing process with pid 3287498 00:04:57.405 16:00:58 -- common/autotest_common.sh@955 -- # kill 3287498 00:04:57.405 16:00:58 -- common/autotest_common.sh@960 -- # wait 3287498 00:04:57.662 [2024-04-24 16:00:58.818938] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
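
Note: everything in the scheduler_create_thread run above goes through rpc.py with a test-local plugin: scheduler_thread_create takes a name (-n), an optional cpumask (-m) and an activity percentage (-a) and returns a thread id, and scheduler_thread_set_active / scheduler_thread_delete act on that id (11 and 12 in the trace). A condensed replay of the sequence, assuming the scheduler app is listening on /var/tmp/spdk.sock and the plugin module is importable (the harness runs from test/event/scheduler):

    rpc() { "$SPDK_DIR"/scripts/rpc.py --plugin scheduler_plugin "$@"; }
    rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100    # 100% busy, pinned to lcore 0
    tid=$(rpc scheduler_thread_create -n half_active -a 0)        # capture the new thread id
    rpc scheduler_thread_set_active "$tid" 50                     # raise it to 50% active
    tid=$(rpc scheduler_thread_create -n deleted -a 100)
    rpc scheduler_thread_delete "$tid"                            # and tear it down again
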
00:04:57.662 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:57.662 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:57.662 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:57.662 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:57.663 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:57.663 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:57.663 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:57.663 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:57.922 00:04:57.922 real 0m3.014s 00:04:57.922 user 0m3.948s 00:04:57.922 sys 0m0.381s 00:04:57.922 16:00:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.922 16:00:59 -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 ************************************ 00:04:57.922 END TEST event_scheduler 00:04:57.922 ************************************ 00:04:57.922 16:00:59 -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.922 16:00:59 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.922 16:00:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.922 16:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.922 16:00:59 -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 ************************************ 00:04:57.922 START TEST app_repeat 00:04:57.922 ************************************ 00:04:57.922 16:00:59 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:57.922 16:00:59 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.922 16:00:59 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.922 16:00:59 -- event/event.sh@13 -- # local nbd_list 00:04:57.922 16:00:59 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.922 16:00:59 -- event/event.sh@14 -- # local bdev_list 00:04:57.922 16:00:59 -- event/event.sh@15 -- # local repeat_times=4 00:04:57.922 16:00:59 -- event/event.sh@17 -- # modprobe nbd 00:04:57.922 16:00:59 -- event/event.sh@19 -- # repeat_pid=3287842 00:04:57.922 16:00:59 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:57.922 16:00:59 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.922 16:00:59 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3287842' 00:04:57.922 Process app_repeat pid: 3287842 00:04:57.922 16:00:59 -- event/event.sh@23 -- # for i in {0..2} 00:04:57.922 16:00:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:57.922 spdk_app_start Round 0 00:04:57.922 16:00:59 -- event/event.sh@25 -- # waitforlisten 3287842 /var/tmp/spdk-nbd.sock 00:04:57.922 16:00:59 -- common/autotest_common.sh@817 -- # '[' -z 3287842 ']' 00:04:57.922 16:00:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.922 16:00:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:57.922 16:00:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:57.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.922 16:00:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:57.922 16:00:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.180 [2024-04-24 16:00:59.223839] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:04:58.180 [2024-04-24 16:00:59.223904] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287842 ] 00:04:58.180 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.180 [2024-04-24 16:00:59.287998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.181 [2024-04-24 16:00:59.400203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.181 [2024-04-24 16:00:59.400209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.439 16:00:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:58.439 16:00:59 -- common/autotest_common.sh@850 -- # return 0 00:04:58.439 16:00:59 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.697 Malloc0 00:04:58.697 16:00:59 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.955 Malloc1 00:04:58.955 16:01:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@12 -- # local i 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.955 16:01:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.213 /dev/nbd0 00:04:59.213 16:01:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.213 16:01:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.213 16:01:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:59.213 16:01:00 -- common/autotest_common.sh@855 -- # local i 00:04:59.213 16:01:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:59.213 16:01:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:59.213 16:01:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:59.213 16:01:00 -- 
common/autotest_common.sh@859 -- # break 00:04:59.213 16:01:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:59.213 16:01:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:59.213 16:01:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.213 1+0 records in 00:04:59.213 1+0 records out 00:04:59.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017815 s, 23.0 MB/s 00:04:59.213 16:01:00 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.213 16:01:00 -- common/autotest_common.sh@872 -- # size=4096 00:04:59.213 16:01:00 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.213 16:01:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:59.213 16:01:00 -- common/autotest_common.sh@875 -- # return 0 00:04:59.213 16:01:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.213 16:01:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.213 16:01:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.470 /dev/nbd1 00:04:59.470 16:01:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.470 16:01:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.470 16:01:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:59.470 16:01:00 -- common/autotest_common.sh@855 -- # local i 00:04:59.470 16:01:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:59.470 16:01:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:59.470 16:01:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:59.470 16:01:00 -- common/autotest_common.sh@859 -- # break 00:04:59.470 16:01:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:59.470 16:01:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:59.470 16:01:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.470 1+0 records in 00:04:59.470 1+0 records out 00:04:59.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248511 s, 16.5 MB/s 00:04:59.470 16:01:00 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.470 16:01:00 -- common/autotest_common.sh@872 -- # size=4096 00:04:59.470 16:01:00 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.470 16:01:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:59.470 16:01:00 -- common/autotest_common.sh@875 -- # return 0 00:04:59.470 16:01:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.470 16:01:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.470 16:01:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.470 16:01:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.471 16:01:00 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.728 { 00:04:59.728 "nbd_device": "/dev/nbd0", 00:04:59.728 "bdev_name": "Malloc0" 00:04:59.728 }, 00:04:59.728 { 00:04:59.728 "nbd_device": "/dev/nbd1", 
00:04:59.728 "bdev_name": "Malloc1" 00:04:59.728 } 00:04:59.728 ]' 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.728 { 00:04:59.728 "nbd_device": "/dev/nbd0", 00:04:59.728 "bdev_name": "Malloc0" 00:04:59.728 }, 00:04:59.728 { 00:04:59.728 "nbd_device": "/dev/nbd1", 00:04:59.728 "bdev_name": "Malloc1" 00:04:59.728 } 00:04:59.728 ]' 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.728 /dev/nbd1' 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.728 /dev/nbd1' 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.728 256+0 records in 00:04:59.728 256+0 records out 00:04:59.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516541 s, 203 MB/s 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.728 256+0 records in 00:04:59.728 256+0 records out 00:04:59.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233285 s, 44.9 MB/s 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.728 256+0 records in 00:04:59.728 256+0 records out 00:04:59.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245624 s, 42.7 MB/s 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.728 16:01:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@51 -- # local i 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.729 16:01:00 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@41 -- # break 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.986 16:01:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@41 -- # break 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.244 16:01:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@65 -- # true 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.501 16:01:01 -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.501 16:01:01 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.759 16:01:02 -- event/event.sh@35 -- # 
sleep 3 00:05:01.016 [2024-04-24 16:01:02.288229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.274 [2024-04-24 16:01:02.398125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.274 [2024-04-24 16:01:02.398127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.274 [2024-04-24 16:01:02.455914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.274 [2024-04-24 16:01:02.455979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.801 16:01:05 -- event/event.sh@23 -- # for i in {0..2} 00:05:03.801 16:01:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:03.801 spdk_app_start Round 1 00:05:03.801 16:01:05 -- event/event.sh@25 -- # waitforlisten 3287842 /var/tmp/spdk-nbd.sock 00:05:03.801 16:01:05 -- common/autotest_common.sh@817 -- # '[' -z 3287842 ']' 00:05:03.801 16:01:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.801 16:01:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:03.801 16:01:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.801 16:01:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:03.801 16:01:05 -- common/autotest_common.sh@10 -- # set +x 00:05:04.059 16:01:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:04.059 16:01:05 -- common/autotest_common.sh@850 -- # return 0 00:05:04.059 16:01:05 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.317 Malloc0 00:05:04.317 16:01:05 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.575 Malloc1 00:05:04.575 16:01:05 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.575 16:01:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@12 -- # local i 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.576 16:01:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.833 /dev/nbd0 00:05:04.833 16:01:06 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.833 16:01:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.833 16:01:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:04.833 16:01:06 -- common/autotest_common.sh@855 -- # local i 00:05:04.833 16:01:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:04.833 16:01:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:04.833 16:01:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:04.833 16:01:06 -- common/autotest_common.sh@859 -- # break 00:05:04.833 16:01:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:04.833 16:01:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:04.833 16:01:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.833 1+0 records in 00:05:04.833 1+0 records out 00:05:04.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000130368 s, 31.4 MB/s 00:05:04.833 16:01:06 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.833 16:01:06 -- common/autotest_common.sh@872 -- # size=4096 00:05:04.833 16:01:06 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.833 16:01:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:04.833 16:01:06 -- common/autotest_common.sh@875 -- # return 0 00:05:04.833 16:01:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.833 16:01:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.833 16:01:06 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.091 /dev/nbd1 00:05:05.091 16:01:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.091 16:01:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.091 16:01:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:05.091 16:01:06 -- common/autotest_common.sh@855 -- # local i 00:05:05.091 16:01:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:05.091 16:01:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:05.091 16:01:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:05.091 16:01:06 -- common/autotest_common.sh@859 -- # break 00:05:05.091 16:01:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:05.091 16:01:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:05.091 16:01:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.091 1+0 records in 00:05:05.091 1+0 records out 00:05:05.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203452 s, 20.1 MB/s 00:05:05.091 16:01:06 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.091 16:01:06 -- common/autotest_common.sh@872 -- # size=4096 00:05:05.091 16:01:06 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.091 16:01:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:05.091 16:01:06 -- common/autotest_common.sh@875 -- # return 0 00:05:05.091 16:01:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.091 16:01:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.091 16:01:06 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.091 16:01:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.091 16:01:06 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.349 { 00:05:05.349 "nbd_device": "/dev/nbd0", 00:05:05.349 "bdev_name": "Malloc0" 00:05:05.349 }, 00:05:05.349 { 00:05:05.349 "nbd_device": "/dev/nbd1", 00:05:05.349 "bdev_name": "Malloc1" 00:05:05.349 } 00:05:05.349 ]' 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.349 { 00:05:05.349 "nbd_device": "/dev/nbd0", 00:05:05.349 "bdev_name": "Malloc0" 00:05:05.349 }, 00:05:05.349 { 00:05:05.349 "nbd_device": "/dev/nbd1", 00:05:05.349 "bdev_name": "Malloc1" 00:05:05.349 } 00:05:05.349 ]' 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.349 /dev/nbd1' 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.349 /dev/nbd1' 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.349 256+0 records in 00:05:05.349 256+0 records out 00:05:05.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00417909 s, 251 MB/s 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.349 256+0 records in 00:05:05.349 256+0 records out 00:05:05.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242938 s, 43.2 MB/s 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.349 16:01:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.607 256+0 records in 00:05:05.607 256+0 records out 00:05:05.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023196 s, 45.2 MB/s 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@51 -- # local i 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@41 -- # break 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.607 16:01:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@41 -- # break 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.865 16:01:07 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.124 16:01:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.124 16:01:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.124 16:01:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.382 16:01:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.382 16:01:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.382 16:01:07 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:06.382 16:01:07 -- bdev/nbd_common.sh@65 -- # true 00:05:06.382 16:01:07 -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.382 16:01:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.382 16:01:07 -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.382 16:01:07 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.382 16:01:07 -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.382 16:01:07 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.640 16:01:07 -- event/event.sh@35 -- # sleep 3 00:05:06.898 [2024-04-24 16:01:07.949970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.898 [2024-04-24 16:01:08.059210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.898 [2024-04-24 16:01:08.059214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.898 [2024-04-24 16:01:08.122243] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.898 [2024-04-24 16:01:08.122327] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.422 16:01:10 -- event/event.sh@23 -- # for i in {0..2} 00:05:09.422 16:01:10 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:09.422 spdk_app_start Round 2 00:05:09.422 16:01:10 -- event/event.sh@25 -- # waitforlisten 3287842 /var/tmp/spdk-nbd.sock 00:05:09.422 16:01:10 -- common/autotest_common.sh@817 -- # '[' -z 3287842 ']' 00:05:09.422 16:01:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.422 16:01:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:09.422 16:01:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
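Each round above exercises the same dd-based data path: nbd_dd_data_verify first fills a temporary file with 1 MiB of random data and copies it onto every exported NBD device, then re-reads each device with cmp to prove the malloc bdevs returned exactly the bytes that were written. A minimal standalone sketch of that pattern (the tmp path and device names here are illustrative, not the ones used by this run):

    tmp_file=/tmp/nbdrandtest           # illustrative path
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # Write phase: 256 x 4 KiB = 1 MiB of random data onto each device.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: byte-wise compare the first 1 MiB of each device
    # against the source file; cmp exits non-zero on the first mismatch,
    # which fails the test under set -e.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"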
00:05:09.422 16:01:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:09.422 16:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.680 16:01:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.680 16:01:10 -- common/autotest_common.sh@850 -- # return 0 00:05:09.680 16:01:10 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.937 Malloc0 00:05:09.937 16:01:11 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.196 Malloc1 00:05:10.196 16:01:11 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@12 -- # local i 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.196 16:01:11 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.454 /dev/nbd0 00:05:10.454 16:01:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.454 16:01:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.454 16:01:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:10.454 16:01:11 -- common/autotest_common.sh@855 -- # local i 00:05:10.454 16:01:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:10.454 16:01:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:10.454 16:01:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:10.454 16:01:11 -- common/autotest_common.sh@859 -- # break 00:05:10.454 16:01:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:10.454 16:01:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:10.454 16:01:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.454 1+0 records in 00:05:10.454 1+0 records out 00:05:10.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00014472 s, 28.3 MB/s 00:05:10.454 16:01:11 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.454 16:01:11 -- common/autotest_common.sh@872 -- # size=4096 00:05:10.454 16:01:11 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.454 16:01:11 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:10.454 16:01:11 -- common/autotest_common.sh@875 -- # return 0 00:05:10.454 16:01:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.454 16:01:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.454 16:01:11 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.735 /dev/nbd1 00:05:10.735 16:01:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.735 16:01:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.735 16:01:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:10.735 16:01:11 -- common/autotest_common.sh@855 -- # local i 00:05:10.735 16:01:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:10.735 16:01:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:10.735 16:01:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:10.736 16:01:11 -- common/autotest_common.sh@859 -- # break 00:05:10.736 16:01:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:10.736 16:01:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:10.736 16:01:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.736 1+0 records in 00:05:10.736 1+0 records out 00:05:10.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212794 s, 19.2 MB/s 00:05:10.736 16:01:11 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.736 16:01:11 -- common/autotest_common.sh@872 -- # size=4096 00:05:10.736 16:01:11 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.736 16:01:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:10.736 16:01:11 -- common/autotest_common.sh@875 -- # return 0 00:05:10.736 16:01:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.736 16:01:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.736 16:01:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.736 16:01:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.736 16:01:11 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.993 { 00:05:10.993 "nbd_device": "/dev/nbd0", 00:05:10.993 "bdev_name": "Malloc0" 00:05:10.993 }, 00:05:10.993 { 00:05:10.993 "nbd_device": "/dev/nbd1", 00:05:10.993 "bdev_name": "Malloc1" 00:05:10.993 } 00:05:10.993 ]' 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.993 { 00:05:10.993 "nbd_device": "/dev/nbd0", 00:05:10.993 "bdev_name": "Malloc0" 00:05:10.993 }, 00:05:10.993 { 00:05:10.993 "nbd_device": "/dev/nbd1", 00:05:10.993 "bdev_name": "Malloc1" 00:05:10.993 } 00:05:10.993 ]' 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.993 /dev/nbd1' 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.993 /dev/nbd1' 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.993 16:01:12 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.993 256+0 records in 00:05:10.993 256+0 records out 00:05:10.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0039477 s, 266 MB/s 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.993 16:01:12 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.251 256+0 records in 00:05:11.251 256+0 records out 00:05:11.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241775 s, 43.4 MB/s 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.251 256+0 records in 00:05:11.251 256+0 records out 00:05:11.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229959 s, 45.6 MB/s 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@51 -- # local i 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.251 16:01:12 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.508 16:01:12 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.508 16:01:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.508 16:01:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.508 16:01:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.508 16:01:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.508 16:01:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.508 16:01:12 -- bdev/nbd_common.sh@41 -- # break 00:05:11.508 16:01:12 -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.508 16:01:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.508 16:01:12 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@41 -- # break 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.766 16:01:12 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@65 -- # true 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.023 16:01:13 -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.023 16:01:13 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.282 16:01:13 -- event/event.sh@35 -- # sleep 3 00:05:12.540 [2024-04-24 16:01:13.633900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.540 [2024-04-24 16:01:13.744458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.540 [2024-04-24 16:01:13.744462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.540 [2024-04-24 16:01:13.807131] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.540 [2024-04-24 16:01:13.807205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
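Teardown in each round relies on polling rather than fixed sleeps: after nbd_stop_disk is issued over the RPC socket, waitfornbd_exit re-checks /proc/partitions until the kernel no longer lists the device. A sketch of that loop, assuming the same 20-attempt budget seen in the trace (the helper name and the sleep interval below are ours):

    # Poll until an NBD device disappears from /proc/partitions after
    # nbd_stop_disk; give up after 20 attempts.
    wait_for_nbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # grep fails once the kernel has dropped the device.
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1    # still attached: let the test fail
    }

    # Usage, with the socket path used throughout this run:
    # rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    # wait_for_nbd_exit nbd0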
00:05:15.818 16:01:16 -- event/event.sh@38 -- # waitforlisten 3287842 /var/tmp/spdk-nbd.sock 00:05:15.818 16:01:16 -- common/autotest_common.sh@817 -- # '[' -z 3287842 ']' 00:05:15.818 16:01:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.818 16:01:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.818 16:01:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.818 16:01:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.818 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.818 16:01:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.818 16:01:16 -- common/autotest_common.sh@850 -- # return 0 00:05:15.818 16:01:16 -- event/event.sh@39 -- # killprocess 3287842 00:05:15.818 16:01:16 -- common/autotest_common.sh@936 -- # '[' -z 3287842 ']' 00:05:15.818 16:01:16 -- common/autotest_common.sh@940 -- # kill -0 3287842 00:05:15.818 16:01:16 -- common/autotest_common.sh@941 -- # uname 00:05:15.818 16:01:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.818 16:01:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3287842 00:05:15.818 16:01:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.818 16:01:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.818 16:01:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3287842' 00:05:15.818 killing process with pid 3287842 00:05:15.818 16:01:16 -- common/autotest_common.sh@955 -- # kill 3287842 00:05:15.818 16:01:16 -- common/autotest_common.sh@960 -- # wait 3287842 00:05:15.818 spdk_app_start is called in Round 0. 00:05:15.818 Shutdown signal received, stop current app iteration 00:05:15.818 Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 reinitialization... 00:05:15.818 spdk_app_start is called in Round 1. 00:05:15.818 Shutdown signal received, stop current app iteration 00:05:15.818 Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 reinitialization... 00:05:15.818 spdk_app_start is called in Round 2. 00:05:15.818 Shutdown signal received, stop current app iteration 00:05:15.818 Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 reinitialization... 00:05:15.818 spdk_app_start is called in Round 3. 
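killprocess, traced just above for pid 3287842, is more careful than a bare kill: it confirms the pid is still alive, reads the process name back with ps so a stale or recycled pid (or a sudo wrapper) is never signalled, and then waits on the pid so the app's exit status is actually collected. Roughly, with the Linux-only uname branch condensed:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # pid still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [ "$name" = sudo ] && return 1             # refuse to kill sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap; works because the test launched the target
    }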
00:05:15.818 Shutdown signal received, stop current app iteration 00:05:15.818 16:01:16 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:15.818 16:01:16 -- event/event.sh@42 -- # return 0 00:05:15.818 00:05:15.818 real 0m17.690s 00:05:15.818 user 0m38.591s 00:05:15.818 sys 0m3.227s 00:05:15.818 16:01:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.818 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.818 ************************************ 00:05:15.818 END TEST app_repeat 00:05:15.818 ************************************ 00:05:15.818 16:01:16 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:15.818 16:01:16 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.818 16:01:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.818 16:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.818 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.818 ************************************ 00:05:15.818 START TEST cpu_locks 00:05:15.818 ************************************ 00:05:15.818 16:01:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.818 * Looking for test storage... 00:05:15.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.818 16:01:17 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:15.818 16:01:17 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:15.818 16:01:17 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:15.818 16:01:17 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:15.818 16:01:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.818 16:01:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.818 16:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.076 ************************************ 00:05:16.076 START TEST default_locks 00:05:16.076 ************************************ 00:05:16.076 16:01:17 -- common/autotest_common.sh@1111 -- # default_locks 00:05:16.076 16:01:17 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3290201 00:05:16.076 16:01:17 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.076 16:01:17 -- event/cpu_locks.sh@47 -- # waitforlisten 3290201 00:05:16.076 16:01:17 -- common/autotest_common.sh@817 -- # '[' -z 3290201 ']' 00:05:16.076 16:01:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.076 16:01:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:16.076 16:01:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.076 16:01:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:16.076 16:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.076 [2024-04-24 16:01:17.212821] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:05:16.076 [2024-04-24 16:01:17.212900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290201 ] 00:05:16.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.076 [2024-04-24 16:01:17.273468] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.334 [2024-04-24 16:01:17.383932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.591 16:01:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.591 16:01:17 -- common/autotest_common.sh@850 -- # return 0 00:05:16.591 16:01:17 -- event/cpu_locks.sh@49 -- # locks_exist 3290201 00:05:16.591 16:01:17 -- event/cpu_locks.sh@22 -- # lslocks -p 3290201 00:05:16.591 16:01:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.849 lslocks: write error 00:05:16.849 16:01:17 -- event/cpu_locks.sh@50 -- # killprocess 3290201 00:05:16.849 16:01:17 -- common/autotest_common.sh@936 -- # '[' -z 3290201 ']' 00:05:16.849 16:01:17 -- common/autotest_common.sh@940 -- # kill -0 3290201 00:05:16.849 16:01:17 -- common/autotest_common.sh@941 -- # uname 00:05:16.849 16:01:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.849 16:01:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3290201 00:05:16.849 16:01:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.849 16:01:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.849 16:01:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3290201' 00:05:16.849 killing process with pid 3290201 00:05:16.849 16:01:17 -- common/autotest_common.sh@955 -- # kill 3290201 00:05:16.849 16:01:17 -- common/autotest_common.sh@960 -- # wait 3290201 00:05:17.414 16:01:18 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3290201 00:05:17.414 16:01:18 -- common/autotest_common.sh@638 -- # local es=0 00:05:17.414 16:01:18 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3290201 00:05:17.414 16:01:18 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:17.414 16:01:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:17.414 16:01:18 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:17.414 16:01:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:17.414 16:01:18 -- common/autotest_common.sh@641 -- # waitforlisten 3290201 00:05:17.414 16:01:18 -- common/autotest_common.sh@817 -- # '[' -z 3290201 ']' 00:05:17.414 16:01:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.414 16:01:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.415 16:01:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
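locks_exist, run above against pid 3290201, asserts that the target actually took its core lock: lslocks lists the file locks held by the pid and grep -q looks for the spdk_cpu_lock entries. The stray "lslocks: write error" in the trace is expected noise, not a failure: grep -q exits on its first match and closes the pipe, so lslocks takes a write error on the remaining output. The check reduces to:

    # Assert that $pid holds at least one SPDK per-core lock file.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }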
00:05:17.415 16:01:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.415 16:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3290201) - No such process 00:05:17.415 ERROR: process (pid: 3290201) is no longer running 00:05:17.415 16:01:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.415 16:01:18 -- common/autotest_common.sh@850 -- # return 1 00:05:17.415 16:01:18 -- common/autotest_common.sh@641 -- # es=1 00:05:17.415 16:01:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:17.415 16:01:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:17.415 16:01:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:17.415 16:01:18 -- event/cpu_locks.sh@54 -- # no_locks 00:05:17.415 16:01:18 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.415 16:01:18 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.415 16:01:18 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.415 00:05:17.415 real 0m1.281s 00:05:17.415 user 0m1.216s 00:05:17.415 sys 0m0.541s 00:05:17.415 16:01:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.415 16:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.415 ************************************ 00:05:17.415 END TEST default_locks 00:05:17.415 ************************************ 00:05:17.415 16:01:18 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:17.415 16:01:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.415 16:01:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.415 16:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.415 ************************************ 00:05:17.415 START TEST default_locks_via_rpc 00:05:17.415 ************************************ 00:05:17.415 16:01:18 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:17.415 16:01:18 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3290488 00:05:17.415 16:01:18 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.415 16:01:18 -- event/cpu_locks.sh@63 -- # waitforlisten 3290488 00:05:17.415 16:01:18 -- common/autotest_common.sh@817 -- # '[' -z 3290488 ']' 00:05:17.415 16:01:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.415 16:01:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.415 16:01:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.415 16:01:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.415 16:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.415 [2024-04-24 16:01:18.622240] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:05:17.415 [2024-04-24 16:01:18.622309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290488 ] 00:05:17.415 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.415 [2024-04-24 16:01:18.682579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.676 [2024-04-24 16:01:18.786281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.959 16:01:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.959 16:01:19 -- common/autotest_common.sh@850 -- # return 0 00:05:17.959 16:01:19 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:17.959 16:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.959 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:17.959 16:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.959 16:01:19 -- event/cpu_locks.sh@67 -- # no_locks 00:05:17.959 16:01:19 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.959 16:01:19 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.959 16:01:19 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.959 16:01:19 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.959 16:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.959 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:17.959 16:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.959 16:01:19 -- event/cpu_locks.sh@71 -- # locks_exist 3290488 00:05:17.959 16:01:19 -- event/cpu_locks.sh@22 -- # lslocks -p 3290488 00:05:17.959 16:01:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.221 16:01:19 -- event/cpu_locks.sh@73 -- # killprocess 3290488 00:05:18.221 16:01:19 -- common/autotest_common.sh@936 -- # '[' -z 3290488 ']' 00:05:18.221 16:01:19 -- common/autotest_common.sh@940 -- # kill -0 3290488 00:05:18.221 16:01:19 -- common/autotest_common.sh@941 -- # uname 00:05:18.221 16:01:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.221 16:01:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3290488 00:05:18.221 16:01:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.222 16:01:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.222 16:01:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3290488' 00:05:18.222 killing process with pid 3290488 00:05:18.222 16:01:19 -- common/autotest_common.sh@955 -- # kill 3290488 00:05:18.222 16:01:19 -- common/autotest_common.sh@960 -- # wait 3290488 00:05:18.788 00:05:18.788 real 0m1.234s 00:05:18.788 user 0m1.182s 00:05:18.788 sys 0m0.515s 00:05:18.788 16:01:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.788 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.788 ************************************ 00:05:18.788 END TEST default_locks_via_rpc 00:05:18.788 ************************************ 00:05:18.788 16:01:19 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:18.788 16:01:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.788 16:01:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.788 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.788 ************************************ 00:05:18.788 START TEST non_locking_app_on_locked_coremask 
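default_locks_via_rpc, which just finished above, drives the same state through the runtime API instead of command-line flags: framework_disable_cpumask_locks drops the per-core lock files of a live target and framework_enable_cpumask_locks reclaims them, so the lslocks check flips from empty back to populated without restarting the app. Against a running target that would look like the following, with rpc.py standing in for the full scripts/rpc.py path used in this run and the grep -c probes being ours rather than the test's own helpers:

    sock=/var/tmp/spdk.sock
    rpc.py -s "$sock" framework_disable_cpumask_locks   # locks released
    lslocks -p "$pid" | grep -c spdk_cpu_lock           # expect 0
    rpc.py -s "$sock" framework_enable_cpumask_locks    # locks retaken
    lslocks -p "$pid" | grep -c spdk_cpu_lock           # expect >= 1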
00:05:18.788 ************************************ 00:05:18.788 16:01:19 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:18.788 16:01:19 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3290668 00:05:18.788 16:01:19 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.788 16:01:19 -- event/cpu_locks.sh@81 -- # waitforlisten 3290668 /var/tmp/spdk.sock 00:05:18.788 16:01:19 -- common/autotest_common.sh@817 -- # '[' -z 3290668 ']' 00:05:18.788 16:01:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.788 16:01:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:18.788 16:01:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.788 16:01:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:18.788 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.788 [2024-04-24 16:01:19.982374] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:18.788 [2024-04-24 16:01:19.982476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290668 ] 00:05:18.788 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.788 [2024-04-24 16:01:20.042941] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.047 [2024-04-24 16:01:20.150159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.304 16:01:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:19.304 16:01:20 -- common/autotest_common.sh@850 -- # return 0 00:05:19.305 16:01:20 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3290676 00:05:19.305 16:01:20 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:19.305 16:01:20 -- event/cpu_locks.sh@85 -- # waitforlisten 3290676 /var/tmp/spdk2.sock 00:05:19.305 16:01:20 -- common/autotest_common.sh@817 -- # '[' -z 3290676 ']' 00:05:19.305 16:01:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.305 16:01:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:19.305 16:01:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.305 16:01:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:19.305 16:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.305 [2024-04-24 16:01:20.454933] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:19.305 [2024-04-24 16:01:20.455043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290676 ] 00:05:19.305 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.305 [2024-04-24 16:01:20.547730] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
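The point of this test is visible in the two launch lines above: both targets ask for the same core mask (-m 0x1), which would normally collide on the shared core's lock file, but the second instance passes --disable-cpumask-locks (and its own -r RPC socket) so it skips lock acquisition entirely, as the "CPU core locks deactivated" notice confirms. Condensed, with spdk_tgt standing in for the full build/bin/spdk_tgt path:

    # First instance owns core 0 and its lock file.
    spdk_tgt -m 0x1 &
    pid1=$!

    # Second instance shares core 0 by opting out of the lock, and uses
    # a separate RPC socket so both targets stay addressable.
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!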
00:05:19.305 [2024-04-24 16:01:20.547769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.563 [2024-04-24 16:01:20.772506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.129 16:01:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.129 16:01:21 -- common/autotest_common.sh@850 -- # return 0 00:05:20.129 16:01:21 -- event/cpu_locks.sh@87 -- # locks_exist 3290668 00:05:20.129 16:01:21 -- event/cpu_locks.sh@22 -- # lslocks -p 3290668 00:05:20.129 16:01:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.695 lslocks: write error 00:05:20.695 16:01:21 -- event/cpu_locks.sh@89 -- # killprocess 3290668 00:05:20.695 16:01:21 -- common/autotest_common.sh@936 -- # '[' -z 3290668 ']' 00:05:20.696 16:01:21 -- common/autotest_common.sh@940 -- # kill -0 3290668 00:05:20.696 16:01:21 -- common/autotest_common.sh@941 -- # uname 00:05:20.696 16:01:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.696 16:01:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3290668 00:05:20.696 16:01:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.696 16:01:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.696 16:01:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3290668' 00:05:20.696 killing process with pid 3290668 00:05:20.696 16:01:21 -- common/autotest_common.sh@955 -- # kill 3290668 00:05:20.696 16:01:21 -- common/autotest_common.sh@960 -- # wait 3290668 00:05:21.630 16:01:22 -- event/cpu_locks.sh@90 -- # killprocess 3290676 00:05:21.630 16:01:22 -- common/autotest_common.sh@936 -- # '[' -z 3290676 ']' 00:05:21.630 16:01:22 -- common/autotest_common.sh@940 -- # kill -0 3290676 00:05:21.630 16:01:22 -- common/autotest_common.sh@941 -- # uname 00:05:21.630 16:01:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.630 16:01:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3290676 00:05:21.630 16:01:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.630 16:01:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.630 16:01:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3290676' 00:05:21.630 killing process with pid 3290676 00:05:21.630 16:01:22 -- common/autotest_common.sh@955 -- # kill 3290676 00:05:21.630 16:01:22 -- common/autotest_common.sh@960 -- # wait 3290676 00:05:21.887 00:05:21.887 real 0m3.232s 00:05:21.887 user 0m3.380s 00:05:21.887 sys 0m1.001s 00:05:21.887 16:01:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.887 16:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:21.887 ************************************ 00:05:21.887 END TEST non_locking_app_on_locked_coremask 00:05:21.887 ************************************ 00:05:22.146 16:01:23 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:22.146 16:01:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.146 16:01:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.146 16:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.146 ************************************ 00:05:22.146 START TEST locking_app_on_unlocked_coremask 00:05:22.146 ************************************ 00:05:22.146 16:01:23 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:22.146 16:01:23 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3291108 00:05:22.146 16:01:23 -- 
event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:22.146 16:01:23 -- event/cpu_locks.sh@99 -- # waitforlisten 3291108 /var/tmp/spdk.sock 00:05:22.146 16:01:23 -- common/autotest_common.sh@817 -- # '[' -z 3291108 ']' 00:05:22.146 16:01:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.146 16:01:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:22.146 16:01:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.146 16:01:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:22.146 16:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.146 [2024-04-24 16:01:23.340115] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:22.146 [2024-04-24 16:01:23.340208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291108 ] 00:05:22.146 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.146 [2024-04-24 16:01:23.401460] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:22.146 [2024-04-24 16:01:23.401496] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.404 [2024-04-24 16:01:23.512315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.337 16:01:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:23.337 16:01:24 -- common/autotest_common.sh@850 -- # return 0 00:05:23.337 16:01:24 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3291239 00:05:23.337 16:01:24 -- event/cpu_locks.sh@103 -- # waitforlisten 3291239 /var/tmp/spdk2.sock 00:05:23.337 16:01:24 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.337 16:01:24 -- common/autotest_common.sh@817 -- # '[' -z 3291239 ']' 00:05:23.337 16:01:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.337 16:01:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.337 16:01:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.337 16:01:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.337 16:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:23.337 [2024-04-24 16:01:24.310963] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:05:23.337 [2024-04-24 16:01:24.311073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291239 ] 00:05:23.337 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.337 [2024-04-24 16:01:24.408802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.596 [2024-04-24 16:01:24.634058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.161 16:01:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.161 16:01:25 -- common/autotest_common.sh@850 -- # return 0 00:05:24.161 16:01:25 -- event/cpu_locks.sh@105 -- # locks_exist 3291239 00:05:24.161 16:01:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.161 16:01:25 -- event/cpu_locks.sh@22 -- # lslocks -p 3291239 00:05:24.419 lslocks: write error 00:05:24.419 16:01:25 -- event/cpu_locks.sh@107 -- # killprocess 3291108 00:05:24.419 16:01:25 -- common/autotest_common.sh@936 -- # '[' -z 3291108 ']' 00:05:24.419 16:01:25 -- common/autotest_common.sh@940 -- # kill -0 3291108 00:05:24.419 16:01:25 -- common/autotest_common.sh@941 -- # uname 00:05:24.419 16:01:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.419 16:01:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3291108 00:05:24.419 16:01:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.419 16:01:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.419 16:01:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3291108' 00:05:24.419 killing process with pid 3291108 00:05:24.419 16:01:25 -- common/autotest_common.sh@955 -- # kill 3291108 00:05:24.419 16:01:25 -- common/autotest_common.sh@960 -- # wait 3291108 00:05:25.352 16:01:26 -- event/cpu_locks.sh@108 -- # killprocess 3291239 00:05:25.352 16:01:26 -- common/autotest_common.sh@936 -- # '[' -z 3291239 ']' 00:05:25.352 16:01:26 -- common/autotest_common.sh@940 -- # kill -0 3291239 00:05:25.352 16:01:26 -- common/autotest_common.sh@941 -- # uname 00:05:25.352 16:01:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.353 16:01:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3291239 00:05:25.353 16:01:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.353 16:01:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.353 16:01:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3291239' 00:05:25.353 killing process with pid 3291239 00:05:25.353 16:01:26 -- common/autotest_common.sh@955 -- # kill 3291239 00:05:25.353 16:01:26 -- common/autotest_common.sh@960 -- # wait 3291239 00:05:25.918 00:05:25.918 real 0m3.750s 00:05:25.918 user 0m4.078s 00:05:25.918 sys 0m1.061s 00:05:25.918 16:01:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.918 16:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:25.918 ************************************ 00:05:25.918 END TEST locking_app_on_unlocked_coremask 00:05:25.918 ************************************ 00:05:25.918 16:01:27 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:25.918 16:01:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.918 16:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.918 16:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:25.918 
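The lock check traced repeatedly above (event/cpu_locks.sh@22) pipes lslocks into grep; a rough reconstruction, not the verbatim helper, is below. The stray "lslocks: write error" lines are expected noise: grep -q exits on its first match and lslocks takes a broken-pipe write error, so they do not indicate a test failure.

    # Rough shape of the traced check:
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # true if the pid holds a spdk_cpu_lock file
    }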
************************************ 00:05:25.918 START TEST locking_app_on_locked_coremask 00:05:25.918 ************************************ 00:05:25.918 16:01:27 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:25.918 16:01:27 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3291561 00:05:25.918 16:01:27 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.918 16:01:27 -- event/cpu_locks.sh@116 -- # waitforlisten 3291561 /var/tmp/spdk.sock 00:05:25.918 16:01:27 -- common/autotest_common.sh@817 -- # '[' -z 3291561 ']' 00:05:25.918 16:01:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.918 16:01:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.918 16:01:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.918 16:01:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.918 16:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.176 [2024-04-24 16:01:27.222391] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:26.176 [2024-04-24 16:01:27.222475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291561 ] 00:05:26.176 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.176 [2024-04-24 16:01:27.283295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.176 [2024-04-24 16:01:27.394655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.109 16:01:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.109 16:01:28 -- common/autotest_common.sh@850 -- # return 0 00:05:27.109 16:01:28 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3291696 00:05:27.109 16:01:28 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:27.109 16:01:28 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3291696 /var/tmp/spdk2.sock 00:05:27.109 16:01:28 -- common/autotest_common.sh@638 -- # local es=0 00:05:27.109 16:01:28 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3291696 /var/tmp/spdk2.sock 00:05:27.109 16:01:28 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:27.109 16:01:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.109 16:01:28 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:27.109 16:01:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.109 16:01:28 -- common/autotest_common.sh@641 -- # waitforlisten 3291696 /var/tmp/spdk2.sock 00:05:27.109 16:01:28 -- common/autotest_common.sh@817 -- # '[' -z 3291696 ']' 00:05:27.109 16:01:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.109 16:01:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.109 16:01:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
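The NOT/valid_exec_arg sequence traced just above inverts an exit status so that an expected failure counts as a pass. A simplified sketch of that wrapper (the real helper in autotest_common.sh also validates the argument type before running it):

    NOT() {
        local es=0
        "$@" || es=$?           # run the command, capture a non-zero exit
        (( es > 128 )) && es=1  # fold signal deaths into a plain failure
        (( es != 0 ))           # succeed only when the wrapped command failed
    }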
00:05:27.109 16:01:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.109 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:05:27.109 [2024-04-24 16:01:28.191921] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:27.109 [2024-04-24 16:01:28.192000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291696 ] 00:05:27.109 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.109 [2024-04-24 16:01:28.288536] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3291561 has claimed it. 00:05:27.109 [2024-04-24 16:01:28.288595] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3291696) - No such process 00:05:27.674 ERROR: process (pid: 3291696) is no longer running 00:05:27.674 16:01:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.674 16:01:28 -- common/autotest_common.sh@850 -- # return 1 00:05:27.674 16:01:28 -- common/autotest_common.sh@641 -- # es=1 00:05:27.674 16:01:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:27.674 16:01:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:27.674 16:01:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:27.674 16:01:28 -- event/cpu_locks.sh@122 -- # locks_exist 3291561 00:05:27.674 16:01:28 -- event/cpu_locks.sh@22 -- # lslocks -p 3291561 00:05:27.674 16:01:28 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.238 lslocks: write error 00:05:28.239 16:01:29 -- event/cpu_locks.sh@124 -- # killprocess 3291561 00:05:28.239 16:01:29 -- common/autotest_common.sh@936 -- # '[' -z 3291561 ']' 00:05:28.239 16:01:29 -- common/autotest_common.sh@940 -- # kill -0 3291561 00:05:28.239 16:01:29 -- common/autotest_common.sh@941 -- # uname 00:05:28.239 16:01:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.239 16:01:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3291561 00:05:28.239 16:01:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.239 16:01:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.239 16:01:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3291561' 00:05:28.239 killing process with pid 3291561 00:05:28.239 16:01:29 -- common/autotest_common.sh@955 -- # kill 3291561 00:05:28.239 16:01:29 -- common/autotest_common.sh@960 -- # wait 3291561 00:05:28.497 00:05:28.497 real 0m2.581s 00:05:28.497 user 0m2.913s 00:05:28.497 sys 0m0.677s 00:05:28.497 16:01:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.497 16:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.497 ************************************ 00:05:28.497 END TEST locking_app_on_locked_coremask 00:05:28.497 ************************************ 00:05:28.497 16:01:29 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:28.497 16:01:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.497 16:01:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.497 16:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.755 ************************************ 00:05:28.755 START TEST locking_overlapped_coremask 00:05:28.755 
************************************ 00:05:28.755 16:01:29 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:28.755 16:01:29 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3291989 00:05:28.755 16:01:29 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:28.755 16:01:29 -- event/cpu_locks.sh@133 -- # waitforlisten 3291989 /var/tmp/spdk.sock 00:05:28.755 16:01:29 -- common/autotest_common.sh@817 -- # '[' -z 3291989 ']' 00:05:28.755 16:01:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.755 16:01:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.755 16:01:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.755 16:01:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.755 16:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.755 [2024-04-24 16:01:29.934952] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:28.755 [2024-04-24 16:01:29.935032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291989 ] 00:05:28.755 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.755 [2024-04-24 16:01:29.997676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.013 [2024-04-24 16:01:30.111862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.013 [2024-04-24 16:01:30.111933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.013 [2024-04-24 16:01:30.111937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.271 16:01:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.271 16:01:30 -- common/autotest_common.sh@850 -- # return 0 00:05:29.271 16:01:30 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3292003 00:05:29.271 16:01:30 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:29.271 16:01:30 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3292003 /var/tmp/spdk2.sock 00:05:29.271 16:01:30 -- common/autotest_common.sh@638 -- # local es=0 00:05:29.271 16:01:30 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3292003 /var/tmp/spdk2.sock 00:05:29.271 16:01:30 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:29.271 16:01:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:29.271 16:01:30 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:29.271 16:01:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:29.271 16:01:30 -- common/autotest_common.sh@641 -- # waitforlisten 3292003 /var/tmp/spdk2.sock 00:05:29.271 16:01:30 -- common/autotest_common.sh@817 -- # '[' -z 3292003 ']' 00:05:29.271 16:01:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.271 16:01:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.271 16:01:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:29.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.271 16:01:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.271 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:05:29.271 [2024-04-24 16:01:30.417512] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:29.271 [2024-04-24 16:01:30.417623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292003 ] 00:05:29.271 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.271 [2024-04-24 16:01:30.508129] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3291989 has claimed it. 00:05:29.271 [2024-04-24 16:01:30.508200] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3292003) - No such process 00:05:29.835 ERROR: process (pid: 3292003) is no longer running 00:05:29.835 16:01:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.835 16:01:31 -- common/autotest_common.sh@850 -- # return 1 00:05:29.835 16:01:31 -- common/autotest_common.sh@641 -- # es=1 00:05:29.835 16:01:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:29.835 16:01:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:29.835 16:01:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:29.835 16:01:31 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:29.835 16:01:31 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:29.835 16:01:31 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:29.835 16:01:31 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:29.835 16:01:31 -- event/cpu_locks.sh@141 -- # killprocess 3291989 00:05:29.835 16:01:31 -- common/autotest_common.sh@936 -- # '[' -z 3291989 ']' 00:05:29.835 16:01:31 -- common/autotest_common.sh@940 -- # kill -0 3291989 00:05:29.835 16:01:31 -- common/autotest_common.sh@941 -- # uname 00:05:29.835 16:01:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.835 16:01:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3291989 00:05:30.092 16:01:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.092 16:01:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.092 16:01:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3291989' 00:05:30.092 killing process with pid 3291989 00:05:30.092 16:01:31 -- common/autotest_common.sh@955 -- # kill 3291989 00:05:30.092 16:01:31 -- common/autotest_common.sh@960 -- # wait 3291989 00:05:30.351 00:05:30.351 real 0m1.708s 00:05:30.351 user 0m4.504s 00:05:30.351 sys 0m0.460s 00:05:30.351 16:01:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:30.351 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.351 ************************************ 00:05:30.351 END TEST locking_overlapped_coremask 00:05:30.351 ************************************ 00:05:30.351 16:01:31 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:30.351 16:01:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.351 16:01:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.351 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.609 ************************************ 00:05:30.609 START TEST locking_overlapped_coremask_via_rpc 00:05:30.609 ************************************ 00:05:30.609 16:01:31 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:30.609 16:01:31 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3292172 00:05:30.609 16:01:31 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:30.609 16:01:31 -- event/cpu_locks.sh@149 -- # waitforlisten 3292172 /var/tmp/spdk.sock 00:05:30.609 16:01:31 -- common/autotest_common.sh@817 -- # '[' -z 3292172 ']' 00:05:30.609 16:01:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.609 16:01:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.609 16:01:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.609 16:01:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.609 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.609 [2024-04-24 16:01:31.776423] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:30.609 [2024-04-24 16:01:31.776508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292172 ] 00:05:30.609 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.609 [2024-04-24 16:01:31.837683] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.609 [2024-04-24 16:01:31.837719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.867 [2024-04-24 16:01:31.951772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.867 [2024-04-24 16:01:31.951827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.867 [2024-04-24 16:01:31.951830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.433 16:01:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.433 16:01:32 -- common/autotest_common.sh@850 -- # return 0 00:05:31.433 16:01:32 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3292311 00:05:31.433 16:01:32 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:31.433 16:01:32 -- event/cpu_locks.sh@153 -- # waitforlisten 3292311 /var/tmp/spdk2.sock 00:05:31.433 16:01:32 -- common/autotest_common.sh@817 -- # '[' -z 3292311 ']' 00:05:31.433 16:01:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.433 16:01:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.433 16:01:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
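Both overlapped tests pit a -m 0x7 instance against a -m 0x1c instance; the masks intersect in exactly one bit, which is why the claim_cpu_cores errors in this section all name core 2:

    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4: bit 2, i.e. core 2, is in both masks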
00:05:31.433 16:01:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.433 16:01:32 -- common/autotest_common.sh@10 -- # set +x 00:05:31.691 [2024-04-24 16:01:32.747863] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:31.691 [2024-04-24 16:01:32.747948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292311 ] 00:05:31.691 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.691 [2024-04-24 16:01:32.835355] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:31.691 [2024-04-24 16:01:32.835394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.949 [2024-04-24 16:01:33.044968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.949 [2024-04-24 16:01:33.048799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:31.949 [2024-04-24 16:01:33.048802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.525 16:01:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.525 16:01:33 -- common/autotest_common.sh@850 -- # return 0 00:05:32.525 16:01:33 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.525 16:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:32.525 16:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:32.525 16:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:32.525 16:01:33 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.525 16:01:33 -- common/autotest_common.sh@638 -- # local es=0 00:05:32.525 16:01:33 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.525 16:01:33 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:32.525 16:01:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:32.525 16:01:33 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:32.525 16:01:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:32.525 16:01:33 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.525 16:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:32.525 16:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:32.525 [2024-04-24 16:01:33.690837] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3292172 has claimed it. 
00:05:32.525 request: 00:05:32.525 { 00:05:32.525 "method": "framework_enable_cpumask_locks", 00:05:32.525 "req_id": 1 00:05:32.525 } 00:05:32.525 Got JSON-RPC error response 00:05:32.525 response: 00:05:32.525 { 00:05:32.525 "code": -32603, 00:05:32.525 "message": "Failed to claim CPU core: 2" 00:05:32.525 } 00:05:32.525 16:01:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:32.525 16:01:33 -- common/autotest_common.sh@641 -- # es=1 00:05:32.525 16:01:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:32.525 16:01:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:32.525 16:01:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:32.525 16:01:33 -- event/cpu_locks.sh@158 -- # waitforlisten 3292172 /var/tmp/spdk.sock 00:05:32.525 16:01:33 -- common/autotest_common.sh@817 -- # '[' -z 3292172 ']' 00:05:32.525 16:01:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.525 16:01:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.525 16:01:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.525 16:01:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.525 16:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:32.781 16:01:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.781 16:01:33 -- common/autotest_common.sh@850 -- # return 0 00:05:32.781 16:01:33 -- event/cpu_locks.sh@159 -- # waitforlisten 3292311 /var/tmp/spdk2.sock 00:05:32.781 16:01:33 -- common/autotest_common.sh@817 -- # '[' -z 3292311 ']' 00:05:32.781 16:01:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.781 16:01:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.781 16:01:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
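The JSON-RPC exchange above can be reproduced by hand with SPDK's stock RPC client (the scripts/rpc.py path is assumed from the usual repo layout); sent to the second, overlapping instance it returns the same -32603 "Failed to claim CPU core: 2" error:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks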
00:05:32.781 16:01:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.781 16:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:33.039 16:01:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:33.039 16:01:34 -- common/autotest_common.sh@850 -- # return 0 00:05:33.039 16:01:34 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:33.039 16:01:34 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.039 16:01:34 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.039 16:01:34 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.039 00:05:33.039 real 0m2.475s 00:05:33.039 user 0m1.214s 00:05:33.039 sys 0m0.192s 00:05:33.039 16:01:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.039 16:01:34 -- common/autotest_common.sh@10 -- # set +x 00:05:33.039 ************************************ 00:05:33.039 END TEST locking_overlapped_coremask_via_rpc 00:05:33.039 ************************************ 00:05:33.039 16:01:34 -- event/cpu_locks.sh@174 -- # cleanup 00:05:33.039 16:01:34 -- event/cpu_locks.sh@15 -- # [[ -z 3292172 ]] 00:05:33.039 16:01:34 -- event/cpu_locks.sh@15 -- # killprocess 3292172 00:05:33.039 16:01:34 -- common/autotest_common.sh@936 -- # '[' -z 3292172 ']' 00:05:33.039 16:01:34 -- common/autotest_common.sh@940 -- # kill -0 3292172 00:05:33.039 16:01:34 -- common/autotest_common.sh@941 -- # uname 00:05:33.039 16:01:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.039 16:01:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3292172 00:05:33.039 16:01:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.039 16:01:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.039 16:01:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3292172' 00:05:33.039 killing process with pid 3292172 00:05:33.039 16:01:34 -- common/autotest_common.sh@955 -- # kill 3292172 00:05:33.039 16:01:34 -- common/autotest_common.sh@960 -- # wait 3292172 00:05:33.602 16:01:34 -- event/cpu_locks.sh@16 -- # [[ -z 3292311 ]] 00:05:33.602 16:01:34 -- event/cpu_locks.sh@16 -- # killprocess 3292311 00:05:33.602 16:01:34 -- common/autotest_common.sh@936 -- # '[' -z 3292311 ']' 00:05:33.602 16:01:34 -- common/autotest_common.sh@940 -- # kill -0 3292311 00:05:33.602 16:01:34 -- common/autotest_common.sh@941 -- # uname 00:05:33.602 16:01:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.602 16:01:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3292311 00:05:33.602 16:01:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:33.602 16:01:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:33.602 16:01:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3292311' 00:05:33.602 killing process with pid 3292311 00:05:33.602 16:01:34 -- common/autotest_common.sh@955 -- # kill 3292311 00:05:33.602 16:01:34 -- common/autotest_common.sh@960 -- # wait 3292311 00:05:34.168 16:01:35 -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.168 16:01:35 -- event/cpu_locks.sh@1 -- # cleanup 00:05:34.168 16:01:35 -- event/cpu_locks.sh@15 -- # [[ -z 3292172 ]] 00:05:34.168 16:01:35 -- event/cpu_locks.sh@15 -- # killprocess 3292172 
00:05:34.168 16:01:35 -- common/autotest_common.sh@936 -- # '[' -z 3292172 ']' 00:05:34.168 16:01:35 -- common/autotest_common.sh@940 -- # kill -0 3292172 00:05:34.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3292172) - No such process 00:05:34.168 16:01:35 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3292172 is not found' 00:05:34.168 Process with pid 3292172 is not found 00:05:34.168 16:01:35 -- event/cpu_locks.sh@16 -- # [[ -z 3292311 ]] 00:05:34.168 16:01:35 -- event/cpu_locks.sh@16 -- # killprocess 3292311 00:05:34.168 16:01:35 -- common/autotest_common.sh@936 -- # '[' -z 3292311 ']' 00:05:34.168 16:01:35 -- common/autotest_common.sh@940 -- # kill -0 3292311 00:05:34.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3292311) - No such process 00:05:34.168 16:01:35 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3292311 is not found' 00:05:34.168 Process with pid 3292311 is not found 00:05:34.168 16:01:35 -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.168 00:05:34.168 real 0m18.169s 00:05:34.168 user 0m31.090s 00:05:34.168 sys 0m5.613s 00:05:34.168 16:01:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.168 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.168 ************************************ 00:05:34.168 END TEST cpu_locks 00:05:34.168 ************************************ 00:05:34.168 00:05:34.168 real 0m43.580s 00:05:34.168 user 1m20.571s 00:05:34.168 sys 0m9.924s 00:05:34.168 16:01:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.168 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.168 ************************************ 00:05:34.168 END TEST event 00:05:34.168 ************************************ 00:05:34.168 16:01:35 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:34.168 16:01:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.168 16:01:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.168 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.168 ************************************ 00:05:34.168 START TEST thread 00:05:34.168 ************************************ 00:05:34.168 16:01:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:34.168 * Looking for test storage... 00:05:34.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:34.168 16:01:35 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.168 16:01:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:34.168 16:01:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.168 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.425 ************************************ 00:05:34.425 START TEST thread_poller_perf 00:05:34.425 ************************************ 00:05:34.425 16:01:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.425 [2024-04-24 16:01:35.485443] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:05:34.426 [2024-04-24 16:01:35.485503] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292811 ] 00:05:34.426 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.426 [2024-04-24 16:01:35.548464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.426 [2024-04-24 16:01:35.659473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.426 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:35.798 ====================================== 00:05:35.798 busy:2713515126 (cyc) 00:05:35.798 total_run_count: 292000 00:05:35.798 tsc_hz: 2700000000 (cyc) 00:05:35.798 ====================================== 00:05:35.798 poller_cost: 9292 (cyc), 3441 (nsec) 00:05:35.798 00:05:35.798 real 0m1.315s 00:05:35.798 user 0m1.231s 00:05:35.798 sys 0m0.078s 00:05:35.798 16:01:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.798 16:01:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.798 ************************************ 00:05:35.798 END TEST thread_poller_perf 00:05:35.798 ************************************ 00:05:35.798 16:01:36 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.798 16:01:36 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:35.798 16:01:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.798 16:01:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.798 ************************************ 00:05:35.798 START TEST thread_poller_perf 00:05:35.798 ************************************ 00:05:35.799 16:01:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.799 [2024-04-24 16:01:36.926443] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:35.799 [2024-04-24 16:01:36.926507] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292970 ] 00:05:35.799 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.799 [2024-04-24 16:01:36.992804] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.056 [2024-04-24 16:01:37.105120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.056 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:36.990 ====================================== 00:05:36.990 busy:2702553080 (cyc) 00:05:36.990 total_run_count: 3725000 00:05:36.990 tsc_hz: 2700000000 (cyc) 00:05:36.990 ====================================== 00:05:36.990 poller_cost: 725 (cyc), 268 (nsec) 00:05:36.990 00:05:36.990 real 0m1.311s 00:05:36.990 user 0m1.215s 00:05:36.990 sys 0m0.090s 00:05:36.990 16:01:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.990 16:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:36.990 ************************************ 00:05:36.990 END TEST thread_poller_perf 00:05:36.990 ************************************ 00:05:36.990 16:01:38 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:36.990 00:05:36.990 real 0m2.924s 00:05:36.990 user 0m2.552s 00:05:36.990 sys 0m0.345s 00:05:36.990 16:01:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.990 16:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:36.990 ************************************ 00:05:36.990 END TEST thread 00:05:36.990 ************************************ 00:05:36.990 16:01:38 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:36.990 16:01:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.990 16:01:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.990 16:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:37.249 ************************************ 00:05:37.249 START TEST accel 00:05:37.249 ************************************ 00:05:37.249 16:01:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:37.249 * Looking for test storage... 00:05:37.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:37.249 16:01:38 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:37.249 16:01:38 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:37.249 16:01:38 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:37.249 16:01:38 -- accel/accel.sh@62 -- # spdk_tgt_pid=3293186 00:05:37.249 16:01:38 -- accel/accel.sh@63 -- # waitforlisten 3293186 00:05:37.249 16:01:38 -- common/autotest_common.sh@817 -- # '[' -z 3293186 ']' 00:05:37.249 16:01:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.249 16:01:38 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:37.249 16:01:38 -- accel/accel.sh@61 -- # build_accel_config 00:05:37.249 16:01:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.249 16:01:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.249 16:01:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.249 16:01:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.249 16:01:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.249 16:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:37.249 16:01:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.249 16:01:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.249 16:01:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.249 16:01:38 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.249 16:01:38 -- accel/accel.sh@41 -- # jq -r . 
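The poller_cost figures in the two thread_poller_perf summaries above are plain integer arithmetic over the reported counters: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. Reproducing them in shell gives the same numbers the tool printed:

    echo $(( 2713515126 / 292000 ))              # 9292 cyc  (1 us period run)
    echo $(( 9292 * 1000000000 / 2700000000 ))   # 3441 nsec at tsc_hz 2.7 GHz
    echo $(( 2702553080 / 3725000 ))             # 725 cyc   (0 us period run)
    echo $(( 725 * 1000000000 / 2700000000 ))    # 268 nsec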
00:05:37.249 [2024-04-24 16:01:38.457656] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:37.249 [2024-04-24 16:01:38.457753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293186 ] 00:05:37.249 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.249 [2024-04-24 16:01:38.517700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.508 [2024-04-24 16:01:38.623881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.766 16:01:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.766 16:01:38 -- common/autotest_common.sh@850 -- # return 0 00:05:37.766 16:01:38 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:37.766 16:01:38 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:37.766 16:01:38 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:37.766 16:01:38 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:37.766 16:01:38 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:37.766 16:01:38 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:37.766 16:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:37.766 16:01:38 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:37.766 16:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:37.766 16:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.766 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.766 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.766 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.767 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.767 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.767 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.767 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.767 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.767 16:01:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.767 16:01:38 -- accel/accel.sh@72 -- # IFS== 00:05:37.767 16:01:38 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.767 16:01:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.767 16:01:38 -- accel/accel.sh@75 -- # killprocess 3293186 00:05:37.767 16:01:38 -- common/autotest_common.sh@936 -- # '[' -z 3293186 ']' 00:05:37.767 16:01:38 -- common/autotest_common.sh@940 -- # kill -0 3293186 00:05:37.767 16:01:38 -- common/autotest_common.sh@941 -- # uname 00:05:37.767 16:01:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.767 16:01:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3293186 00:05:37.767 16:01:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.767 16:01:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.767 16:01:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3293186' 00:05:37.767 killing process with pid 3293186 00:05:37.767 16:01:38 -- common/autotest_common.sh@955 -- # kill 3293186 00:05:37.767 16:01:38 -- common/autotest_common.sh@960 -- # wait 3293186 00:05:38.333 16:01:39 -- accel/accel.sh@76 -- # trap - ERR 00:05:38.333 16:01:39 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:38.333 16:01:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:38.333 16:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.333 16:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.333 16:01:39 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:38.333 16:01:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:38.333 16:01:39 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:38.333 16:01:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.333 16:01:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.333 16:01:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.333 16:01:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.333 16:01:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.333 16:01:39 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.333 16:01:39 -- accel/accel.sh@41 -- # jq -r . 00:05:38.333 16:01:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.333 16:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.333 16:01:39 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:38.333 16:01:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:38.333 16:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.333 16:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.591 ************************************ 00:05:38.591 START TEST accel_missing_filename 00:05:38.591 ************************************ 00:05:38.591 16:01:39 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:38.591 16:01:39 -- common/autotest_common.sh@638 -- # local es=0 00:05:38.591 16:01:39 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:38.591 16:01:39 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:38.591 16:01:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.591 16:01:39 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:38.591 16:01:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.591 16:01:39 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:38.591 16:01:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:38.591 16:01:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.591 16:01:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.591 16:01:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.591 16:01:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.591 16:01:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.591 16:01:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.591 16:01:39 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.591 16:01:39 -- accel/accel.sh@41 -- # jq -r . 00:05:38.591 [2024-04-24 16:01:39.650607] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:38.591 [2024-04-24 16:01:39.650669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293368 ] 00:05:38.591 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.591 [2024-04-24 16:01:39.712673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.591 [2024-04-24 16:01:39.826041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.850 [2024-04-24 16:01:39.884688] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.850 [2024-04-24 16:01:39.966692] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:38.850 A filename is required. 
00:05:38.850 16:01:40 -- common/autotest_common.sh@641 -- # es=234 00:05:38.850 16:01:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:38.850 16:01:40 -- common/autotest_common.sh@650 -- # es=106 00:05:38.850 16:01:40 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:38.850 16:01:40 -- common/autotest_common.sh@658 -- # es=1 00:05:38.850 16:01:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:38.850 00:05:38.850 real 0m0.456s 00:05:38.850 user 0m0.349s 00:05:38.850 sys 0m0.139s 00:05:38.850 16:01:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.850 16:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.850 ************************************ 00:05:38.850 END TEST accel_missing_filename 00:05:38.850 ************************************ 00:05:38.850 16:01:40 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.850 16:01:40 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:38.850 16:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.850 16:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.107 ************************************ 00:05:39.107 START TEST accel_compress_verify 00:05:39.107 ************************************ 00:05:39.107 16:01:40 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:39.107 16:01:40 -- common/autotest_common.sh@638 -- # local es=0 00:05:39.107 16:01:40 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:39.107 16:01:40 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:39.107 16:01:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.107 16:01:40 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:39.107 16:01:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.107 16:01:40 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:39.107 16:01:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:39.107 16:01:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.107 16:01:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.107 16:01:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.108 16:01:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.108 16:01:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.108 16:01:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.108 16:01:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.108 16:01:40 -- accel/accel.sh@41 -- # jq -r . 00:05:39.108 [2024-04-24 16:01:40.224173] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:05:39.108 [2024-04-24 16:01:40.224237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293516 ] 00:05:39.108 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.108 [2024-04-24 16:01:40.288310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.365 [2024-04-24 16:01:40.400512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.365 [2024-04-24 16:01:40.459280] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.365 [2024-04-24 16:01:40.541873] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:39.365 00:05:39.365 Compression does not support the verify option, aborting. 00:05:39.624 16:01:40 -- common/autotest_common.sh@641 -- # es=161 00:05:39.624 16:01:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:39.624 16:01:40 -- common/autotest_common.sh@650 -- # es=33 00:05:39.624 16:01:40 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:39.624 16:01:40 -- common/autotest_common.sh@658 -- # es=1 00:05:39.624 16:01:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:39.624 00:05:39.624 real 0m0.450s 00:05:39.624 user 0m0.344s 00:05:39.624 sys 0m0.141s 00:05:39.624 16:01:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.624 16:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.624 ************************************ 00:05:39.624 END TEST accel_compress_verify 00:05:39.624 ************************************ 00:05:39.625 16:01:40 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:39.625 16:01:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:39.625 16:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.625 16:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.625 ************************************ 00:05:39.625 START TEST accel_wrong_workload 00:05:39.625 ************************************ 00:05:39.625 16:01:40 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:39.625 16:01:40 -- common/autotest_common.sh@638 -- # local es=0 00:05:39.625 16:01:40 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:39.625 16:01:40 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:39.625 16:01:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.625 16:01:40 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:39.625 16:01:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.625 16:01:40 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:39.625 16:01:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:39.625 16:01:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.625 16:01:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.625 16:01:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.625 16:01:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.625 16:01:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.625 16:01:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.625 16:01:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.625 16:01:40 -- accel/accel.sh@41 -- # jq -r . 
00:05:39.625 Unsupported workload type: foobar 00:05:39.625 [2024-04-24 16:01:40.784794] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:39.625 accel_perf options: 00:05:39.625 [-h help message] 00:05:39.625 [-q queue depth per core] 00:05:39.625 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:39.625 [-T number of threads per core 00:05:39.625 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:39.625 [-t time in seconds] 00:05:39.625 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:39.625 [ dif_verify, , dif_generate, dif_generate_copy 00:05:39.625 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:39.625 [-l for compress/decompress workloads, name of uncompressed input file 00:05:39.625 [-S for crc32c workload, use this seed value (default 0) 00:05:39.625 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:39.625 [-f for fill workload, use this BYTE value (default 255) 00:05:39.625 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:39.625 [-y verify result if this switch is on] 00:05:39.625 [-a tasks to allocate per core (default: same value as -q)] 00:05:39.625 Can be used to spread operations across a wider range of memory. 00:05:39.625 16:01:40 -- common/autotest_common.sh@641 -- # es=1 00:05:39.625 16:01:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:39.625 16:01:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:39.625 16:01:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:39.625 00:05:39.625 real 0m0.021s 00:05:39.625 user 0m0.013s 00:05:39.625 sys 0m0.008s 00:05:39.625 16:01:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.625 16:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.625 ************************************ 00:05:39.625 END TEST accel_wrong_workload 00:05:39.625 ************************************ 00:05:39.625 Error: writing output failed: Broken pipe 00:05:39.625 16:01:40 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:39.625 16:01:40 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:39.625 16:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.625 16:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.884 ************************************ 00:05:39.884 START TEST accel_negative_buffers 00:05:39.884 ************************************ 00:05:39.884 16:01:40 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:39.884 16:01:40 -- common/autotest_common.sh@638 -- # local es=0 00:05:39.884 16:01:40 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:39.884 16:01:40 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:39.884 16:01:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.884 16:01:40 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:39.884 16:01:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.884 16:01:40 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:39.884 16:01:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:39.884 16:01:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.884 16:01:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.884 16:01:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.884 16:01:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.884 16:01:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.884 16:01:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.884 16:01:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.884 16:01:40 -- accel/accel.sh@41 -- # jq -r . 00:05:39.884 -x option must be non-negative. 00:05:39.884 [2024-04-24 16:01:40.929330] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:39.884 accel_perf options: 00:05:39.884 [-h help message] 00:05:39.884 [-q queue depth per core] 00:05:39.884 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:39.884 [-T number of threads per core 00:05:39.884 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:39.884 [-t time in seconds] 00:05:39.884 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:39.884 [ dif_verify, , dif_generate, dif_generate_copy 00:05:39.884 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:39.884 [-l for compress/decompress workloads, name of uncompressed input file 00:05:39.884 [-S for crc32c workload, use this seed value (default 0) 00:05:39.884 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:39.884 [-f for fill workload, use this BYTE value (default 255) 00:05:39.884 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:39.884 [-y verify result if this switch is on] 00:05:39.884 [-a tasks to allocate per core (default: same value as -q)] 00:05:39.884 Can be used to spread operations across a wider range of memory. 
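The usage dump above doubles as a compact reference for accel_perf. A few valid invocations assembled strictly from those options (a sketch; the binary path is the one used throughout this run):

    B=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    "$B" -t 1 -w crc32c -S 32 -y             # crc32c for 1 second with seed 32, verifying results
    "$B" -t 1 -w xor -x 2 -y                 # xor with the documented minimum of 2 source buffers
    "$B" -t 1 -w fill -f 128 -q 64 -a 64 -y  # fill with byte 128, queue depth 64, 64 tasks per core

The last line mirrors the flags the accel_fill test passes later in this log.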
00:05:39.884 16:01:40 -- common/autotest_common.sh@641 -- # es=1 00:05:39.884 16:01:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:39.884 16:01:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:39.884 16:01:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:39.884 00:05:39.884 real 0m0.023s 00:05:39.884 user 0m0.012s 00:05:39.884 sys 0m0.011s 00:05:39.884 16:01:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.884 16:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.884 ************************************ 00:05:39.884 END TEST accel_negative_buffers 00:05:39.884 ************************************ 00:05:39.884 Error: writing output failed: Broken pipe 00:05:39.884 16:01:40 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:39.884 16:01:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:39.884 16:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.884 16:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.884 ************************************ 00:05:39.884 START TEST accel_crc32c 00:05:39.884 ************************************ 00:05:39.884 16:01:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:39.884 16:01:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.884 16:01:41 -- accel/accel.sh@17 -- # local accel_module 00:05:39.884 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:39.884 16:01:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:39.884 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:39.884 16:01:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:39.884 16:01:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.884 16:01:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.884 16:01:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.884 16:01:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.884 16:01:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.884 16:01:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.884 16:01:41 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.884 16:01:41 -- accel/accel.sh@41 -- # jq -r . 00:05:39.884 [2024-04-24 16:01:41.069479] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
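From here the suite switches from negative tests to functional ones. The long val= trace that follows is the harness parsing accel_perf's configuration summary line by line (workload crc32c, 4096-byte buffers, the software module, a 1-second run) to capture accel_opc and accel_module for the assertions at the end of the test. The run_test line reduces to roughly this direct call (a sketch; /dev/fd/62 carries the JSON accel config built by build_accel_config, which is empty here):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w crc32c -S 32 -y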
00:05:39.884 [2024-04-24 16:01:41.069543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293721 ] 00:05:39.884 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.884 [2024-04-24 16:01:41.133808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.143 [2024-04-24 16:01:41.249543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val= 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val= 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val=0x1 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val= 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val= 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val=crc32c 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val=32 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val= 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val=software 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@22 -- # accel_module=software 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val=32 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val=32 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- 
accel/accel.sh@20 -- # val=1 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val=Yes 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val= 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.143 16:01:41 -- accel/accel.sh@20 -- # val= 00:05:40.143 16:01:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.143 16:01:41 -- accel/accel.sh@19 -- # read -r var val 00:05:41.517 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.517 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.517 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.517 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.517 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.517 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.517 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.517 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.517 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.517 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.517 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.517 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.517 16:01:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.517 16:01:42 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:41.517 16:01:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.517 00:05:41.517 real 0m1.468s 00:05:41.517 user 0m1.318s 00:05:41.517 sys 0m0.153s 00:05:41.517 16:01:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.517 16:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:41.517 ************************************ 00:05:41.517 END TEST accel_crc32c 00:05:41.517 ************************************ 00:05:41.517 16:01:42 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:41.517 16:01:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:41.517 16:01:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.517 16:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:41.517 ************************************ 00:05:41.517 START TEST 
accel_crc32c_C2 00:05:41.517 ************************************ 00:05:41.517 16:01:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:41.517 16:01:42 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.517 16:01:42 -- accel/accel.sh@17 -- # local accel_module 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.517 16:01:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:41.517 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.517 16:01:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:41.517 16:01:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.517 16:01:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.517 16:01:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.517 16:01:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.517 16:01:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.517 16:01:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.517 16:01:42 -- accel/accel.sh@40 -- # local IFS=, 00:05:41.517 16:01:42 -- accel/accel.sh@41 -- # jq -r . 00:05:41.517 [2024-04-24 16:01:42.654822] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:41.517 [2024-04-24 16:01:42.654880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293885 ] 00:05:41.517 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.517 [2024-04-24 16:01:42.716824] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.791 [2024-04-24 16:01:42.830078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val=0x1 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val=crc32c 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val=0 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val=software 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@22 -- # accel_module=software 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val=32 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val=32 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val=1 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val=Yes 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.791 16:01:42 -- accel/accel.sh@20 -- # val= 00:05:41.791 16:01:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.791 16:01:42 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- 
accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.190 16:01:44 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:43.190 16:01:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.190 00:05:43.190 real 0m1.452s 00:05:43.190 user 0m1.322s 00:05:43.190 sys 0m0.131s 00:05:43.190 16:01:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.190 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.190 ************************************ 00:05:43.190 END TEST accel_crc32c_C2 00:05:43.190 ************************************ 00:05:43.190 16:01:44 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:43.190 16:01:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:43.190 16:01:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.190 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.190 ************************************ 00:05:43.190 START TEST accel_copy 00:05:43.190 ************************************ 00:05:43.190 16:01:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:43.190 16:01:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.190 16:01:44 -- accel/accel.sh@17 -- # local accel_module 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:43.190 16:01:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.190 16:01:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.190 16:01:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.190 16:01:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.190 16:01:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.190 16:01:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.190 16:01:44 -- accel/accel.sh@40 -- # local IFS=, 00:05:43.190 16:01:44 -- accel/accel.sh@41 -- # jq -r . 00:05:43.190 [2024-04-24 16:01:44.219063] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
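accel_copy follows the same shape as the crc32c cases: pipe in an empty JSON config, run one workload for a second, then check what was parsed out of the trace. The assertion block that closes each of these sections (the [[ -n software ]] / [[ -n copy ]] lines) amounts to the following, using the accel_module and accel_opc locals declared in the trace:

    [[ -n $accel_module ]]           # a module name was captured from the output
    [[ -n $accel_opc ]]              # the opcode was captured as well
    [[ $accel_module == software ]]  # and it resolved to the software engine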
00:05:43.190 [2024-04-24 16:01:44.219124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294061 ] 00:05:43.190 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.190 [2024-04-24 16:01:44.281098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.190 [2024-04-24 16:01:44.393438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val=0x1 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val=copy 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val=software 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@22 -- # accel_module=software 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val=32 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val=32 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- accel/accel.sh@20 -- # val=1 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.190 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.190 16:01:44 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:43.190 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.191 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.191 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.191 16:01:44 -- accel/accel.sh@20 -- # val=Yes 00:05:43.191 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.191 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.191 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.191 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.191 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.191 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.191 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:43.191 16:01:44 -- accel/accel.sh@20 -- # val= 00:05:43.191 16:01:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.191 16:01:44 -- accel/accel.sh@19 -- # IFS=: 00:05:43.191 16:01:44 -- accel/accel.sh@19 -- # read -r var val 00:05:44.568 16:01:45 -- accel/accel.sh@20 -- # val= 00:05:44.568 16:01:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.568 16:01:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.568 16:01:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.568 16:01:45 -- accel/accel.sh@20 -- # val= 00:05:44.568 16:01:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.568 16:01:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.568 16:01:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.569 16:01:45 -- accel/accel.sh@20 -- # val= 00:05:44.569 16:01:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.569 16:01:45 -- accel/accel.sh@20 -- # val= 00:05:44.569 16:01:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.569 16:01:45 -- accel/accel.sh@20 -- # val= 00:05:44.569 16:01:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.569 16:01:45 -- accel/accel.sh@20 -- # val= 00:05:44.569 16:01:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.569 16:01:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.569 16:01:45 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:44.569 16:01:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.569 00:05:44.569 real 0m1.453s 00:05:44.569 user 0m1.316s 00:05:44.569 sys 0m0.138s 00:05:44.569 16:01:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.569 16:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.569 ************************************ 00:05:44.569 END TEST accel_copy 00:05:44.569 ************************************ 00:05:44.569 16:01:45 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.569 16:01:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:44.569 16:01:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.569 16:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.569 ************************************ 00:05:44.569 START TEST accel_fill 00:05:44.569 ************************************ 00:05:44.569 16:01:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.569 16:01:45 -- accel/accel.sh@16 -- # local accel_opc 
00:05:44.569 16:01:45 -- accel/accel.sh@17 -- # local accel_module 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.569 16:01:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.569 16:01:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.569 16:01:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.569 16:01:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.569 16:01:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.569 16:01:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.569 16:01:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.569 16:01:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.569 16:01:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.569 16:01:45 -- accel/accel.sh@40 -- # local IFS=, 00:05:44.569 16:01:45 -- accel/accel.sh@41 -- # jq -r . 00:05:44.569 [2024-04-24 16:01:45.787783] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:44.569 [2024-04-24 16:01:45.787853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294333 ] 00:05:44.569 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.569 [2024-04-24 16:01:45.849250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.829 [2024-04-24 16:01:45.961658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val= 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val= 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val=0x1 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val= 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val= 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val=fill 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@23 -- # accel_opc=fill 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val=0x80 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 
-- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val= 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val=software 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@22 -- # accel_module=software 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val=64 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val=64 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val=1 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val=Yes 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val= 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:44.829 16:01:46 -- accel/accel.sh@20 -- # val= 00:05:44.829 16:01:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # IFS=: 00:05:44.829 16:01:46 -- accel/accel.sh@19 -- # read -r var val 00:05:46.245 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.245 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.245 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.245 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.245 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.245 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.245 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.245 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.245 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.245 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.245 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.245 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.245 16:01:47 -- accel/accel.sh@19 
-- # IFS=: 00:05:46.245 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.245 16:01:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.245 16:01:47 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:46.245 16:01:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.245 00:05:46.245 real 0m1.461s 00:05:46.245 user 0m1.317s 00:05:46.245 sys 0m0.146s 00:05:46.245 16:01:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.245 16:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:46.245 ************************************ 00:05:46.245 END TEST accel_fill 00:05:46.245 ************************************ 00:05:46.245 16:01:47 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:46.246 16:01:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:46.246 16:01:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.246 16:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:46.246 ************************************ 00:05:46.246 START TEST accel_copy_crc32c 00:05:46.246 ************************************ 00:05:46.246 16:01:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:05:46.246 16:01:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.246 16:01:47 -- accel/accel.sh@17 -- # local accel_module 00:05:46.246 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.246 16:01:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:46.246 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.246 16:01:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:46.246 16:01:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.246 16:01:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.246 16:01:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.246 16:01:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.246 16:01:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.246 16:01:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.246 16:01:47 -- accel/accel.sh@40 -- # local IFS=, 00:05:46.246 16:01:47 -- accel/accel.sh@41 -- # jq -r . 00:05:46.246 [2024-04-24 16:01:47.366165] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
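Note that the copy_crc32c trace below carries two 4096-byte buffer entries where the plain crc32c run had one, consistent with this opcode touching both a source and a destination buffer (an inference from the trace, not a statement about the tool's internals). The direct equivalent of the run_test line is, roughly:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w copy_crc32c -y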
00:05:46.246 [2024-04-24 16:01:47.366230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294503 ] 00:05:46.246 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.246 [2024-04-24 16:01:47.429311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.543 [2024-04-24 16:01:47.538512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val=0x1 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val=0 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val=software 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@22 -- # accel_module=software 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val=32 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 
00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val=32 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val=1 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val=Yes 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.543 16:01:47 -- accel/accel.sh@20 -- # val= 00:05:46.543 16:01:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.543 16:01:47 -- accel/accel.sh@19 -- # read -r var val 00:05:47.918 16:01:48 -- accel/accel.sh@20 -- # val= 00:05:47.918 16:01:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.918 16:01:48 -- accel/accel.sh@20 -- # val= 00:05:47.918 16:01:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.918 16:01:48 -- accel/accel.sh@20 -- # val= 00:05:47.918 16:01:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.918 16:01:48 -- accel/accel.sh@20 -- # val= 00:05:47.918 16:01:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.918 16:01:48 -- accel/accel.sh@20 -- # val= 00:05:47.918 16:01:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.918 16:01:48 -- accel/accel.sh@20 -- # val= 00:05:47.918 16:01:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.918 16:01:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.918 16:01:48 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:47.918 16:01:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.918 00:05:47.918 real 0m1.457s 00:05:47.918 user 0m1.305s 00:05:47.918 sys 0m0.153s 00:05:47.918 16:01:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.918 16:01:48 -- common/autotest_common.sh@10 -- # set +x 00:05:47.918 ************************************ 00:05:47.918 END TEST accel_copy_crc32c 00:05:47.918 ************************************ 00:05:47.918 16:01:48 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:47.918 
16:01:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:47.918 16:01:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.918 16:01:48 -- common/autotest_common.sh@10 -- # set +x 00:05:47.918 ************************************ 00:05:47.918 START TEST accel_copy_crc32c_C2 00:05:47.918 ************************************ 00:05:47.918 16:01:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:47.918 16:01:48 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.918 16:01:48 -- accel/accel.sh@17 -- # local accel_module 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.918 16:01:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:47.918 16:01:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.918 16:01:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:47.918 16:01:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.918 16:01:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.918 16:01:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.918 16:01:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.918 16:01:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.918 16:01:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.918 16:01:48 -- accel/accel.sh@40 -- # local IFS=, 00:05:47.918 16:01:48 -- accel/accel.sh@41 -- # jq -r . 00:05:47.918 [2024-04-24 16:01:48.936194] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:47.918 [2024-04-24 16:01:48.936257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294685 ] 00:05:47.918 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.918 [2024-04-24 16:01:48.999569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.918 [2024-04-24 16:01:49.111833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.918 16:01:49 -- accel/accel.sh@20 -- # val= 00:05:47.918 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val= 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val=0x1 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val= 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val= 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 
16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val=0 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val= 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val=software 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@22 -- # accel_module=software 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val=32 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val=32 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val=1 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val=Yes 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val= 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:47.919 16:01:49 -- accel/accel.sh@20 -- # val= 00:05:47.919 16:01:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # IFS=: 00:05:47.919 16:01:49 -- accel/accel.sh@19 -- # read -r var val 00:05:49.292 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.292 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.292 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.292 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.292 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.292 16:01:50 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.292 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.292 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.292 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.292 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.292 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.292 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.292 16:01:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.292 16:01:50 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:49.292 16:01:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.292 00:05:49.292 real 0m1.464s 00:05:49.292 user 0m1.314s 00:05:49.292 sys 0m0.152s 00:05:49.292 16:01:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.292 16:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:49.292 ************************************ 00:05:49.292 END TEST accel_copy_crc32c_C2 00:05:49.292 ************************************ 00:05:49.292 16:01:50 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:49.292 16:01:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:49.292 16:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.292 16:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:49.292 ************************************ 00:05:49.292 START TEST accel_dualcast 00:05:49.292 ************************************ 00:05:49.292 16:01:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:49.292 16:01:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.292 16:01:50 -- accel/accel.sh@17 -- # local accel_module 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.292 16:01:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:49.292 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.292 16:01:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:49.292 16:01:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.292 16:01:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.292 16:01:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.292 16:01:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.292 16:01:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.292 16:01:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.292 16:01:50 -- accel/accel.sh@40 -- # local IFS=, 00:05:49.292 16:01:50 -- accel/accel.sh@41 -- # jq -r . 00:05:49.292 [2024-04-24 16:01:50.517003] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:05:49.292 [2024-04-24 16:01:50.517065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294950 ] 00:05:49.292 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.551 [2024-04-24 16:01:50.582862] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.551 [2024-04-24 16:01:50.695990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val=0x1 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val=dualcast 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val=software 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@22 -- # accel_module=software 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val=32 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val=32 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val=1 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 
-- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val=Yes 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.551 16:01:50 -- accel/accel.sh@20 -- # val= 00:05:49.551 16:01:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.551 16:01:50 -- accel/accel.sh@19 -- # read -r var val 00:05:50.922 16:01:51 -- accel/accel.sh@20 -- # val= 00:05:50.922 16:01:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # IFS=: 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # read -r var val 00:05:50.923 16:01:51 -- accel/accel.sh@20 -- # val= 00:05:50.923 16:01:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # IFS=: 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # read -r var val 00:05:50.923 16:01:51 -- accel/accel.sh@20 -- # val= 00:05:50.923 16:01:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # IFS=: 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # read -r var val 00:05:50.923 16:01:51 -- accel/accel.sh@20 -- # val= 00:05:50.923 16:01:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # IFS=: 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # read -r var val 00:05:50.923 16:01:51 -- accel/accel.sh@20 -- # val= 00:05:50.923 16:01:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # IFS=: 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # read -r var val 00:05:50.923 16:01:51 -- accel/accel.sh@20 -- # val= 00:05:50.923 16:01:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # IFS=: 00:05:50.923 16:01:51 -- accel/accel.sh@19 -- # read -r var val 00:05:50.923 16:01:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.923 16:01:51 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:50.923 16:01:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.923 00:05:50.923 real 0m1.472s 00:05:50.923 user 0m1.329s 00:05:50.923 sys 0m0.144s 00:05:50.923 16:01:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.923 16:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:50.923 ************************************ 00:05:50.923 END TEST accel_dualcast 00:05:50.923 ************************************ 00:05:50.923 16:01:51 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:50.923 16:01:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:50.923 16:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.923 16:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:50.923 ************************************ 00:05:50.923 START TEST accel_compare 00:05:50.923 ************************************ 00:05:50.923 16:01:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:50.923 16:01:52 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.923 16:01:52 
-- accel/accel.sh@17 -- # local accel_module 00:05:50.923 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:50.923 16:01:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:50.923 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:50.923 16:01:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:50.923 16:01:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.923 16:01:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.923 16:01:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.923 16:01:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.923 16:01:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.923 16:01:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.923 16:01:52 -- accel/accel.sh@40 -- # local IFS=, 00:05:50.923 16:01:52 -- accel/accel.sh@41 -- # jq -r . 00:05:50.923 [2024-04-24 16:01:52.113689] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:50.923 [2024-04-24 16:01:52.113772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295118 ] 00:05:50.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.923 [2024-04-24 16:01:52.177631] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.180 [2024-04-24 16:01:52.298491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val= 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val= 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val=0x1 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val= 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val= 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val=compare 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val= 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- 
accel/accel.sh@20 -- # val=software 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@22 -- # accel_module=software 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val=32 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val=32 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val=1 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val=Yes 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val= 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.180 16:01:52 -- accel/accel.sh@20 -- # val= 00:05:51.180 16:01:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.180 16:01:52 -- accel/accel.sh@19 -- # read -r var val 00:05:52.553 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.553 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.553 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.553 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.553 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.553 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.553 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.553 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.553 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.553 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.553 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.553 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.553 16:01:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.553 16:01:53 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:52.553 16:01:53 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:52.553 00:05:52.553 real 0m1.478s 00:05:52.553 user 0m1.337s 00:05:52.553 sys 0m0.143s 00:05:52.553 16:01:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.553 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:05:52.553 ************************************ 00:05:52.553 END TEST accel_compare 00:05:52.553 ************************************ 00:05:52.553 16:01:53 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:52.553 16:01:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:52.553 16:01:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.553 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:05:52.553 ************************************ 00:05:52.553 START TEST accel_xor 00:05:52.553 ************************************ 00:05:52.553 16:01:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:52.553 16:01:53 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.553 16:01:53 -- accel/accel.sh@17 -- # local accel_module 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.553 16:01:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:52.553 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.553 16:01:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:52.553 16:01:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.553 16:01:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.553 16:01:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.553 16:01:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.553 16:01:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.553 16:01:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.553 16:01:53 -- accel/accel.sh@40 -- # local IFS=, 00:05:52.553 16:01:53 -- accel/accel.sh@41 -- # jq -r . 00:05:52.553 [2024-04-24 16:01:53.713306] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
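The dense runs of 'case "$var" in', 'IFS=:', and 'read -r var val' above and below come from a loop in accel.sh that consumes colon-separated key:value pairs describing the run and keeps the ones the final checks need. A self-contained sketch of that pattern, with illustrative variable names and sample input rather than the script's real data:

# Parse key:value pairs the way the traced loop does, then assert on them.
while IFS=: read -r var val; do
  case "$var" in
    accel_opc)    opcode=$val ;;   # operation that actually ran
    accel_module) module=$val ;;   # module that executed it
  esac
done <<'EOF'
accel_opc:xor
accel_module:software
EOF
[[ $module == software ]] && echo "xor executed on the $module module"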
00:05:52.553 [2024-04-24 16:01:53.713372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295400 ] 00:05:52.553 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.553 [2024-04-24 16:01:53.779115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.811 [2024-04-24 16:01:53.900808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val=0x1 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val=xor 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val=2 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val=software 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@22 -- # accel_module=software 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val=32 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val=32 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- 
accel/accel.sh@20 -- # val=1 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val=Yes 00:05:52.811 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.811 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.811 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.812 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.812 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.812 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:52.812 16:01:53 -- accel/accel.sh@20 -- # val= 00:05:52.812 16:01:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.812 16:01:53 -- accel/accel.sh@19 -- # IFS=: 00:05:52.812 16:01:53 -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.184 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.184 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.184 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.184 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.184 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.184 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 16:01:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.184 16:01:55 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:54.184 16:01:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.184 00:05:54.184 real 0m1.489s 00:05:54.184 user 0m1.339s 00:05:54.184 sys 0m0.152s 00:05:54.184 16:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.184 16:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:54.184 ************************************ 00:05:54.184 END TEST accel_xor 00:05:54.184 ************************************ 00:05:54.184 16:01:55 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:54.184 16:01:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:54.184 16:01:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.184 16:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:54.184 ************************************ 00:05:54.184 START TEST accel_xor 
00:05:54.184 ************************************ 00:05:54.184 16:01:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:54.184 16:01:55 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.184 16:01:55 -- accel/accel.sh@17 -- # local accel_module 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 16:01:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:54.184 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 16:01:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:54.184 16:01:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.184 16:01:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.184 16:01:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.184 16:01:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.184 16:01:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.184 16:01:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.184 16:01:55 -- accel/accel.sh@40 -- # local IFS=, 00:05:54.184 16:01:55 -- accel/accel.sh@41 -- # jq -r . 00:05:54.184 [2024-04-24 16:01:55.324066] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:05:54.184 [2024-04-24 16:01:55.324131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295565 ] 00:05:54.184 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.184 [2024-04-24 16:01:55.385661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.442 [2024-04-24 16:01:55.506269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val=0x1 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val=xor 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val=3 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val=software 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@22 -- # accel_module=software 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val=32 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val=32 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val=1 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val=Yes 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:54.442 16:01:55 -- accel/accel.sh@20 -- # val= 00:05:54.442 16:01:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # IFS=: 00:05:54.442 16:01:55 -- accel/accel.sh@19 -- # read -r var val 00:05:55.815 16:01:56 -- accel/accel.sh@20 -- # val= 00:05:55.815 16:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.815 16:01:56 -- accel/accel.sh@20 -- # val= 00:05:55.815 16:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.815 16:01:56 -- accel/accel.sh@20 -- # val= 00:05:55.815 16:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.815 16:01:56 -- accel/accel.sh@20 -- # val= 00:05:55.815 16:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.815 16:01:56 -- accel/accel.sh@20 -- # val= 00:05:55.815 16:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # 
read -r var val 00:05:55.815 16:01:56 -- accel/accel.sh@20 -- # val= 00:05:55.815 16:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.815 16:01:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.815 16:01:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.815 16:01:56 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:55.815 16:01:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.815 00:05:55.815 real 0m1.475s 00:05:55.815 user 0m1.330s 00:05:55.815 sys 0m0.151s 00:05:55.815 16:01:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.815 16:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:55.815 ************************************ 00:05:55.815 END TEST accel_xor 00:05:55.815 ************************************ 00:05:55.815 16:01:56 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:55.815 16:01:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:55.815 16:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.815 16:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:55.815 ************************************ 00:05:55.815 START TEST accel_dif_verify 00:05:55.815 ************************************ 00:05:55.816 16:01:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:55.816 16:01:56 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.816 16:01:56 -- accel/accel.sh@17 -- # local accel_module 00:05:55.816 16:01:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.816 16:01:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:55.816 16:01:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.816 16:01:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:55.816 16:01:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.816 16:01:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.816 16:01:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.816 16:01:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.816 16:01:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.816 16:01:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.816 16:01:56 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.816 16:01:56 -- accel/accel.sh@41 -- # jq -r . 00:05:55.816 [2024-04-24 16:01:56.929802] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
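Besides the usual '4096 bytes' buffer value, the dif_verify trace below also sets '512 bytes' and '8 bytes'. Those sizes are consistent with standard DIF geometry, where each 512-byte block carries an 8-byte protection tuple; mapping the traced values to these roles is an inference from the sizes, not something accel_perf states. A quick sanity check of that layout:

# Implied layout: 4 KiB of data in 512 B blocks, 8 B of protection
# information (DIF tuple) per block. Values taken from the trace below.
buf=4096 blk=512 pi=8
blocks=$((buf / blk))   # 8 blocks per buffer
meta=$((blocks * pi))   # 64 bytes of DIF metadata per buffer
echo "$blocks blocks, $meta bytes of protection info"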
00:05:55.816 [2024-04-24 16:01:56.929866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295784 ] 00:05:55.816 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.816 [2024-04-24 16:01:56.994426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.073 [2024-04-24 16:01:57.116310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val= 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val= 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val=0x1 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val= 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val= 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val=dif_verify 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val= 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val=software 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@22 -- # accel_module=software 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r 
var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val=32 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val=32 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val=1 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val=No 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val= 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.073 16:01:57 -- accel/accel.sh@20 -- # val= 00:05:56.073 16:01:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.073 16:01:57 -- accel/accel.sh@19 -- # read -r var val 00:05:57.445 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.445 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.445 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.445 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.445 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.445 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.445 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.445 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.445 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.445 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.445 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.445 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.445 16:01:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.445 16:01:58 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:57.445 16:01:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.445 00:05:57.445 real 0m1.489s 00:05:57.445 user 0m1.339s 00:05:57.445 sys 0m0.154s 00:05:57.445 16:01:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.445 16:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.445 
************************************ 00:05:57.445 END TEST accel_dif_verify 00:05:57.445 ************************************ 00:05:57.445 16:01:58 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:57.445 16:01:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:57.445 16:01:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.445 16:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.445 ************************************ 00:05:57.445 START TEST accel_dif_generate 00:05:57.445 ************************************ 00:05:57.445 16:01:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:57.445 16:01:58 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.445 16:01:58 -- accel/accel.sh@17 -- # local accel_module 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.445 16:01:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:57.445 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.445 16:01:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:57.445 16:01:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.445 16:01:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.446 16:01:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.446 16:01:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.446 16:01:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.446 16:01:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.446 16:01:58 -- accel/accel.sh@40 -- # local IFS=, 00:05:57.446 16:01:58 -- accel/accel.sh@41 -- # jq -r . 00:05:57.446 [2024-04-24 16:01:58.536532] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
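The START TEST / END TEST banners and the real/user/sys triplets bracketing each workload come from the run_test helper, which names a test, times its body, and prints the markers. A simplified stand-in for the real function in autotest_common.sh (a sketch of the visible behavior, not a copy of it):

# Minimal run_test-style wrapper: banner, timed body, banner.
run_test_sketch() {
  local name=$1; shift
  echo "START TEST $name"
  time "$@"               # bash keyword; prints the real/user/sys lines
  local rc=$?
  echo "END TEST $name"
  return $rc
}
run_test_sketch demo_sleep sleep 1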
00:05:57.446 [2024-04-24 16:01:58.536598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296011 ] 00:05:57.446 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.446 [2024-04-24 16:01:58.602499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.446 [2024-04-24 16:01:58.723002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val=0x1 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val=dif_generate 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val=software 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@22 -- # accel_module=software 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read 
-r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val=32 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val=32 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val=1 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val=No 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.703 16:01:58 -- accel/accel.sh@20 -- # val= 00:05:57.703 16:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.703 16:01:58 -- accel/accel.sh@19 -- # read -r var val 00:05:59.077 16:01:59 -- accel/accel.sh@20 -- # val= 00:05:59.077 16:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.077 16:01:59 -- accel/accel.sh@19 -- # IFS=: 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.077 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.077 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.077 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.077 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.077 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.077 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.077 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.077 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.077 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.077 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.077 16:02:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.077 16:02:00 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:59.077 16:02:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.077 00:05:59.077 real 0m1.484s 00:05:59.077 user 0m1.349s 00:05:59.077 sys 0m0.138s 00:05:59.077 16:02:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.077 16:02:00 -- common/autotest_common.sh@10 -- # set +x 00:05:59.077 
************************************ 00:05:59.077 END TEST accel_dif_generate 00:05:59.077 ************************************ 00:05:59.077 16:02:00 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:59.077 16:02:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:59.077 16:02:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.077 16:02:00 -- common/autotest_common.sh@10 -- # set +x 00:05:59.077 ************************************ 00:05:59.077 START TEST accel_dif_generate_copy 00:05:59.077 ************************************ 00:05:59.077 16:02:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:59.077 16:02:00 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.077 16:02:00 -- accel/accel.sh@17 -- # local accel_module 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.077 16:02:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:59.077 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.077 16:02:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:59.077 16:02:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.077 16:02:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.077 16:02:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.077 16:02:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.077 16:02:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.077 16:02:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.077 16:02:00 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.077 16:02:00 -- accel/accel.sh@41 -- # jq -r . 00:05:59.077 [2024-04-24 16:02:00.134626] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
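Each workload's verdict in this trace reduces to the three accel.sh@27 checks that run once the val loop drains (visible below for dif_generate_copy): an opcode and a module were reported, and the module is the one requested, 'software' throughout this run since no hardware accel module is configured. The same checks with illustrative values:

# Pass criteria applied after every workload in this run.
accel_module=software
accel_opc=dif_generate_copy
[[ -n $accel_module ]] && [[ -n $accel_opc ]] \
  && [[ $accel_module == software ]] \
  && echo "PASS: $accel_opc on the $accel_module module"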
00:05:59.077 [2024-04-24 16:02:00.134693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296181 ] 00:05:59.077 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.077 [2024-04-24 16:02:00.195040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.077 [2024-04-24 16:02:00.307290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.334 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.334 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.334 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val=0x1 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val=software 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@22 -- # accel_module=software 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val=32 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val=32 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r 
var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val=1 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val=No 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:05:59.335 16:02:00 -- accel/accel.sh@20 -- # val= 00:05:59.335 16:02:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # IFS=: 00:05:59.335 16:02:00 -- accel/accel.sh@19 -- # read -r var val 00:06:00.708 16:02:01 -- accel/accel.sh@20 -- # val= 00:06:00.708 16:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # IFS=: 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # read -r var val 00:06:00.708 16:02:01 -- accel/accel.sh@20 -- # val= 00:06:00.708 16:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # IFS=: 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # read -r var val 00:06:00.708 16:02:01 -- accel/accel.sh@20 -- # val= 00:06:00.708 16:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # IFS=: 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # read -r var val 00:06:00.708 16:02:01 -- accel/accel.sh@20 -- # val= 00:06:00.708 16:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # IFS=: 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # read -r var val 00:06:00.708 16:02:01 -- accel/accel.sh@20 -- # val= 00:06:00.708 16:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # IFS=: 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # read -r var val 00:06:00.708 16:02:01 -- accel/accel.sh@20 -- # val= 00:06:00.708 16:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # IFS=: 00:06:00.708 16:02:01 -- accel/accel.sh@19 -- # read -r var val 00:06:00.708 16:02:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.708 16:02:01 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:00.708 16:02:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.708 00:06:00.708 real 0m1.462s 00:06:00.708 user 0m1.333s 00:06:00.708 sys 0m0.130s 00:06:00.708 16:02:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.708 16:02:01 -- common/autotest_common.sh@10 -- # set +x 00:06:00.708 ************************************ 00:06:00.708 END TEST accel_dif_generate_copy 00:06:00.708 ************************************ 00:06:00.708 16:02:01 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:00.708 16:02:01 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.708 16:02:01 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:00.708 16:02:01 -- 
00:06:00.708 16:02:01 -- accel/accel.sh@115 -- # [[ y == y ]]
00:06:00.708 16:02:01 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:00.708 ************************************
00:06:00.708 START TEST accel_comp
00:06:00.708 ************************************
00:06:00.708 16:02:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:00.708 16:02:01 -- accel/accel.sh@12 -- # build_accel_config
00:06:00.708 16:02:01 -- accel/accel.sh@41 -- # jq -r .
00:06:00.708 [2024-04-24 16:02:01.711656] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:06:00.708 [2024-04-24 16:02:01.711717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296461 ]
00:06:00.708 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.708 [2024-04-24 16:02:01.776965] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.708 [2024-04-24 16:02:01.897351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.708-00:06:02.080 [xtrace condensed: config read back as name/value pairs: 0x1, compress, '4096 bytes', software, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', No]
00:06:02.080 16:02:03 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:02.080 16:02:03 -- accel/accel.sh@27 -- # [[ -n compress ]]
00:06:02.080 16:02:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:02.080 real 0m1.492s
00:06:02.080 user 0m1.342s
00:06:02.080 sys 0m0.153s
00:06:02.080 ************************************
00:06:02.080 END TEST accel_comp
00:06:02.080 ************************************
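The captured command line shows exactly how the compress case is driven, so it can be replayed outside the harness. A sketch, assuming a built SPDK tree and that the JSON config the wrapper feeds on /dev/fd/62 (used here to select the software module) can be omitted for a default run:

    # One-second compress workload against the bundled bib corpus file.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib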
00:06:02.080 16:02:03 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:02.080 ************************************
00:06:02.080 START TEST accel_decomp
00:06:02.080 ************************************
00:06:02.080 16:02:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:02.080 [2024-04-24 16:02:03.325456] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:06:02.080 [2024-04-24 16:02:03.325520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296627 ]
00:06:02.080 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.339 [2024-04-24 16:02:03.387571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.339 [2024-04-24 16:02:03.505757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.339-00:06:03.711 [xtrace condensed: config read back as name/value pairs: 0x1, decompress, '4096 bytes', software, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
00:06:03.711 16:02:04 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:03.711 16:02:04 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:03.711 16:02:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:03.711 real 0m1.478s
00:06:03.711 user 0m1.338s
00:06:03.711 sys 0m0.142s
00:06:03.711 ************************************
00:06:03.711 END TEST accel_decomp
00:06:03.711 ************************************
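Decompress repeats the run with -w decompress plus -y, and the 'Yes' in the read-back (versus 'No' for compress) suggests -y turns on result verification, an inference from the traced values rather than something the log states. The real/user/sys triplet after each test reads like bash time(1) output from the run_test wrapper; the same measurement can be taken directly:

    # Time a verified decompress pass the way run_test appears to.
    time ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y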
00:06:03.711 16:02:04 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:03.711 ************************************
00:06:03.711 START TEST accel_decmop_full
00:06:03.711 ************************************
00:06:03.711 16:02:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:03.711 [2024-04-24 16:02:04.915820] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:06:03.711 [2024-04-24 16:02:04.915886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296869 ]
00:06:03.711 EAL: No free 2048 kB hugepages reported on node 1
00:06:03.711 [2024-04-24 16:02:04.979067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.970 [2024-04-24 16:02:05.100266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.970-00:06:05.342 [xtrace condensed: config read back as name/value pairs: 0x1, decompress, '111250 bytes', software, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
00:06:05.342 16:02:06 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:05.342 16:02:06 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:05.342 16:02:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:05.342 real 0m1.502s
00:06:05.342 user 0m1.359s
00:06:05.342 sys 0m0.145s
00:06:05.342 ************************************
00:06:05.342 END TEST accel_decmop_full
00:06:05.342 ************************************
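The only flag this 'full' variant adds is -o 0, and the read-back buffer size grows from '4096 bytes' to '111250 bytes', which points to -o being the transfer size, with 0 meaning the whole input chunk (an assumption drawn from the traced values, not documented in the log):

    # Full-chunk decompress: each operation spans the entire 111250-byte input
    # instead of 4 KiB slices.
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0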
00:06:05.342 16:02:06 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:05.342 ************************************
00:06:05.342 START TEST accel_decomp_mcore
00:06:05.342 ************************************
00:06:05.342 16:02:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:05.342 [2024-04-24 16:02:06.538020] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:06:05.342 [2024-04-24 16:02:06.538081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297075 ]
00:06:05.342 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.342 [2024-04-24 16:02:06.598645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:05.601 [2024-04-24 16:02:06.723051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:05.601 [2024-04-24 16:02:06.723105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:05.601 [2024-04-24 16:02:06.723154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:05.601 [2024-04-24 16:02:06.723157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.601-00:06:06.975 [xtrace condensed: config read back as name/value pairs: 0xf, decompress, '4096 bytes', software, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
00:06:06.976 16:02:08 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:06.976 16:02:08 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:06.976 16:02:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:06.976 real 0m1.493s
00:06:06.976 user 0m4.811s
00:06:06.976 sys 0m0.151s
00:06:06.976 ************************************
00:06:06.976 END TEST accel_decomp_mcore
00:06:06.976 ************************************
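With -m 0xf the EAL reports four available cores and a reactor starts on each of cores 0-3: the mask is a plain bitmap, one bit per core. A quick illustrative helper for building such masks (not part of the suite):

    # Bitmask selecting the first N cores; N=4 yields 0xf (cores 0,1,2,3).
    ncores=4
    printf 'core mask: 0x%x\n' $(( (1 << ncores) - 1 ))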
00:06:06.976 16:02:08 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:06.976 ************************************
00:06:06.976 START TEST accel_decomp_full_mcore
00:06:06.976 ************************************
00:06:06.976 16:02:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:06.976 [2024-04-24 16:02:08.154518] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:06:06.976 [2024-04-24 16:02:08.154580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297242 ]
00:06:06.976 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.234 [2024-04-24 16:02:08.217108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:07.234 [2024-04-24 16:02:08.324083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:07.234 [2024-04-24 16:02:08.326774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:07.234 [2024-04-24 16:02:08.326823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:07.234 [2024-04-24 16:02:08.326827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.234-00:06:08.606 [xtrace condensed: config read back as name/value pairs: 0xf, decompress, '111250 bytes', software, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
00:06:08.606 16:02:09 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:08.606 16:02:09 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:08.606 16:02:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:08.606 real 0m1.470s
00:06:08.606 user 0m4.782s
00:06:08.606 sys 0m0.143s
00:06:08.606 ************************************
00:06:08.606 END TEST accel_decomp_full_mcore
00:06:08.606 ************************************
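This run combines the full-chunk and multicore options, and the roughly 4.8s of user time against roughly 1.5s of wall time is consistent with four reactors polling in parallel for the one-second test. The equivalent direct invocation, under the same assumptions as the earlier sketches:

    # Full-chunk decompress spread across four cores at once.
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf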
00:06:08.606 16:02:09 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:08.607 ************************************
00:06:08.607 START TEST accel_decomp_mthread
00:06:08.607 ************************************
00:06:08.607 16:02:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:08.607 [2024-04-24 16:02:09.753914] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:06:08.607 [2024-04-24 16:02:09.753982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297530 ]
00:06:08.607 EAL: No free 2048 kB hugepages reported on node 1
00:06:08.607 [2024-04-24 16:02:09.819654] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.865 [2024-04-24 16:02:09.940182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.865-00:06:10.238 [xtrace condensed: config read back as name/value pairs: 0x1, decompress, '4096 bytes', software, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes]
00:06:10.238 16:02:11 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:10.238 16:02:11 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:10.238 16:02:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:10.238 real 0m1.498s
00:06:10.238 user 0m1.346s
00:06:10.238 sys 0m0.155s
00:06:10.238 ************************************
00:06:10.238 END TEST accel_decomp_mthread
00:06:10.238 ************************************
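The mthread variants pass -T 2, and the read-back records 2 in the slot where single-threaded runs record 1, so -T plausibly sets the per-run thread (channel) count; treat that reading as an assumption, since the log never names the field:

    # Decompress with two worker threads instead of one.
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2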
00:06:10.239 [2024-04-24 16:02:11.370190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297695 ] 00:06:10.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.239 [2024-04-24 16:02:11.433964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.497 [2024-04-24 16:02:11.554133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.497 16:02:11 -- accel/accel.sh@20 -- # val= 00:06:10.497 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val= 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val= 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val=0x1 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val= 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val= 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val=decompress 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val= 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val=software 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val=32 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 
16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val=32 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val=2 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val=Yes 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val= 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.498 16:02:11 -- accel/accel.sh@20 -- # val= 00:06:10.498 16:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.498 16:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:11.884 16:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.884 16:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.884 16:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.884 16:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.884 16:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.884 16:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.884 16:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.884 16:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.884 16:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.884 16:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.884 16:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.884 16:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.884 16:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.884 16:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.884 16:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.884 16:02:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.884 16:02:12 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:11.884 16:02:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.884 00:06:11.884 real 0m1.516s 00:06:11.884 user 0m1.371s 00:06:11.884 sys 0m0.146s 00:06:11.884 16:02:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.884 16:02:12 -- common/autotest_common.sh@10 -- # 
set +x 00:06:11.884 ************************************ 00:06:11.884 END TEST accel_deomp_full_mthread 00:06:11.884 ************************************ 00:06:11.884 16:02:12 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:11.884 16:02:12 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:11.884 16:02:12 -- accel/accel.sh@137 -- # build_accel_config 00:06:11.884 16:02:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.884 16:02:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:11.884 16:02:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.884 16:02:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.884 16:02:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.884 16:02:12 -- common/autotest_common.sh@10 -- # set +x 00:06:11.884 16:02:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.884 16:02:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.884 16:02:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:11.884 16:02:12 -- accel/accel.sh@41 -- # jq -r . 00:06:11.884 ************************************ 00:06:11.884 START TEST accel_dif_functional_tests 00:06:11.884 ************************************ 00:06:11.884 16:02:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:11.884 [2024-04-24 16:02:13.022981] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:06:11.884 [2024-04-24 16:02:13.023049] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297977 ] 00:06:11.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.884 [2024-04-24 16:02:13.080728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.142 [2024-04-24 16:02:13.197255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.142 [2024-04-24 16:02:13.197309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.142 [2024-04-24 16:02:13.197313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.142 00:06:12.142 00:06:12.142 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.142 http://cunit.sourceforge.net/ 00:06:12.142 00:06:12.142 00:06:12.142 Suite: accel_dif 00:06:12.142 Test: verify: DIF generated, GUARD check ...passed 00:06:12.142 Test: verify: DIF generated, APPTAG check ...passed 00:06:12.142 Test: verify: DIF generated, REFTAG check ...passed 00:06:12.142 Test: verify: DIF not generated, GUARD check ...[2024-04-24 16:02:13.299597] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:12.142 [2024-04-24 16:02:13.299665] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:12.142 passed 00:06:12.142 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 16:02:13.299709] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:12.142 [2024-04-24 16:02:13.299740] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:12.142 passed 00:06:12.142 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 16:02:13.299792] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:12.142 [2024-04-24 
16:02:13.299825] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:12.142 passed 00:06:12.142 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:12.142 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 16:02:13.299897] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:12.142 passed 00:06:12.142 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:12.142 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:12.142 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:12.142 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 16:02:13.300064] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:12.142 passed 00:06:12.142 Test: generate copy: DIF generated, GUARD check ...passed 00:06:12.142 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:12.142 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:12.142 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:12.142 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:12.142 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:12.142 Test: generate copy: iovecs-len validate ...[2024-04-24 16:02:13.300323] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:12.142 passed 00:06:12.142 Test: generate copy: buffer alignment validate ...passed 00:06:12.142 00:06:12.142 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.142 suites 1 1 n/a 0 0 00:06:12.142 tests 20 20 20 0 0 00:06:12.142 asserts 204 204 204 0 n/a 00:06:12.142 00:06:12.142 Elapsed time = 0.003 seconds 00:06:12.401 00:06:12.401 real 0m0.579s 00:06:12.401 user 0m0.855s 00:06:12.401 sys 0m0.177s 00:06:12.401 16:02:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.401 16:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.401 ************************************ 00:06:12.401 END TEST accel_dif_functional_tests 00:06:12.401 ************************************ 00:06:12.401 00:06:12.401 real 0m35.232s 00:06:12.401 user 0m37.421s 00:06:12.401 sys 0m5.533s 00:06:12.401 16:02:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.401 16:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.401 ************************************ 00:06:12.401 END TEST accel 00:06:12.401 ************************************ 00:06:12.401 16:02:13 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:12.401 16:02:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.401 16:02:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.401 16:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.660 ************************************ 00:06:12.660 START TEST accel_rpc 00:06:12.660 ************************************ 00:06:12.660 16:02:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:12.660 * Looking for test storage... 
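The accel_rpc suite starting here drives a bare spdk_tgt over JSON-RPC. Condensed from the rpc_cmd calls traced below, and assuming rpc.py talks to the default /var/tmp/spdk.sock, the opcode-assignment flow is:

  # spdk_tgt is launched with --wait-for-rpc, so opcodes can still be reassigned:
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init, module is bogus
  ./scripts/rpc.py accel_assign_opc -o copy -m software    # valid software module wins
  ./scripts/rpc.py framework_start_init                    # finish subsystem initialization
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expected to print "software"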
00:06:12.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:12.660 16:02:13 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.660 16:02:13 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3298057 00:06:12.660 16:02:13 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:12.660 16:02:13 -- accel/accel_rpc.sh@15 -- # waitforlisten 3298057 00:06:12.660 16:02:13 -- common/autotest_common.sh@817 -- # '[' -z 3298057 ']' 00:06:12.660 16:02:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.660 16:02:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:12.660 16:02:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.660 16:02:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:12.660 16:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.660 [2024-04-24 16:02:13.813898] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:06:12.660 [2024-04-24 16:02:13.813998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3298057 ] 00:06:12.660 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.660 [2024-04-24 16:02:13.870571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.918 [2024-04-24 16:02:13.972566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.918 16:02:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:12.918 16:02:14 -- common/autotest_common.sh@850 -- # return 0 00:06:12.918 16:02:14 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:12.918 16:02:14 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:12.918 16:02:14 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:12.918 16:02:14 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:12.918 16:02:14 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:12.918 16:02:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.918 16:02:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.918 16:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:12.918 ************************************ 00:06:12.918 START TEST accel_assign_opcode 00:06:12.918 ************************************ 00:06:12.918 16:02:14 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:12.918 16:02:14 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:12.918 16:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.918 16:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:12.918 [2024-04-24 16:02:14.121440] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:12.918 16:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.918 16:02:14 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:12.918 16:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.918 16:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:12.918 [2024-04-24 16:02:14.129443] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:12.918 16:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.918 16:02:14 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:12.918 16:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.918 16:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:13.176 16:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.176 16:02:14 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:13.176 16:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.176 16:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:13.176 16:02:14 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:13.176 16:02:14 -- accel/accel_rpc.sh@42 -- # grep software 00:06:13.176 16:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.176 software 00:06:13.176 00:06:13.176 real 0m0.301s 00:06:13.176 user 0m0.041s 00:06:13.176 sys 0m0.007s 00:06:13.176 16:02:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.176 16:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:13.176 ************************************ 00:06:13.176 END TEST accel_assign_opcode 00:06:13.176 ************************************ 00:06:13.176 16:02:14 -- accel/accel_rpc.sh@55 -- # killprocess 3298057 00:06:13.176 16:02:14 -- common/autotest_common.sh@936 -- # '[' -z 3298057 ']' 00:06:13.176 16:02:14 -- common/autotest_common.sh@940 -- # kill -0 3298057 00:06:13.176 16:02:14 -- common/autotest_common.sh@941 -- # uname 00:06:13.176 16:02:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.176 16:02:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3298057 00:06:13.434 16:02:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.434 16:02:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.434 16:02:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3298057' 00:06:13.434 killing process with pid 3298057 00:06:13.434 16:02:14 -- common/autotest_common.sh@955 -- # kill 3298057 00:06:13.434 16:02:14 -- common/autotest_common.sh@960 -- # wait 3298057 00:06:13.692 00:06:13.692 real 0m1.232s 00:06:13.692 user 0m1.199s 00:06:13.692 sys 0m0.452s 00:06:13.692 16:02:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.692 16:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:13.692 ************************************ 00:06:13.692 END TEST accel_rpc 00:06:13.692 ************************************ 00:06:13.692 16:02:14 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:13.692 16:02:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.692 16:02:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.692 16:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:13.951 ************************************ 00:06:13.951 START TEST app_cmdline 00:06:13.951 ************************************ 00:06:13.951 16:02:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:13.951 * Looking for test storage... 
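The cmdline suite beginning here is small enough to restate. A sketch of its core assertions, assuming the workspace paths above (the harness goes through its own rpc_cmd wrapper rather than rpc.py directly):

  # spdk_tgt is started with an RPC allowlist of exactly two methods:
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version                      # version JSON, as printed below
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # exactly: rpc_get_methods, spdk_get_version
  ./scripts/rpc.py env_dpdk_get_mem_stats                # not allowlisted: JSON-RPC error -32601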
00:06:13.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:13.951 16:02:15 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:13.951 16:02:15 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3298394 00:06:13.951 16:02:15 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:13.951 16:02:15 -- app/cmdline.sh@18 -- # waitforlisten 3298394 00:06:13.951 16:02:15 -- common/autotest_common.sh@817 -- # '[' -z 3298394 ']' 00:06:13.951 16:02:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.951 16:02:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.951 16:02:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.951 16:02:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.951 16:02:15 -- common/autotest_common.sh@10 -- # set +x 00:06:13.951 [2024-04-24 16:02:15.176891] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:06:13.951 [2024-04-24 16:02:15.176988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3298394 ] 00:06:13.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.209 [2024-04-24 16:02:15.241683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.209 [2024-04-24 16:02:15.346683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.467 16:02:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:14.467 16:02:15 -- common/autotest_common.sh@850 -- # return 0 00:06:14.467 16:02:15 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:14.725 { 00:06:14.725 "version": "SPDK v24.05-pre git sha1 77aac3af8", 00:06:14.725 "fields": { 00:06:14.725 "major": 24, 00:06:14.725 "minor": 5, 00:06:14.725 "patch": 0, 00:06:14.725 "suffix": "-pre", 00:06:14.725 "commit": "77aac3af8" 00:06:14.726 } 00:06:14.726 } 00:06:14.726 16:02:15 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:14.726 16:02:15 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:14.726 16:02:15 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:14.726 16:02:15 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:14.726 16:02:15 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:14.726 16:02:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.726 16:02:15 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:14.726 16:02:15 -- common/autotest_common.sh@10 -- # set +x 00:06:14.726 16:02:15 -- app/cmdline.sh@26 -- # sort 00:06:14.726 16:02:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.726 16:02:15 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:14.726 16:02:15 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:14.726 16:02:15 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.726 16:02:15 -- common/autotest_common.sh@638 -- # local es=0 00:06:14.726 16:02:15 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.726 16:02:15 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.726 16:02:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.726 16:02:15 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.726 16:02:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.726 16:02:15 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.726 16:02:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.726 16:02:15 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.726 16:02:15 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:14.726 16:02:15 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.984 request: 00:06:14.984 { 00:06:14.984 "method": "env_dpdk_get_mem_stats", 00:06:14.984 "req_id": 1 00:06:14.984 } 00:06:14.984 Got JSON-RPC error response 00:06:14.984 response: 00:06:14.984 { 00:06:14.984 "code": -32601, 00:06:14.984 "message": "Method not found" 00:06:14.984 } 00:06:14.984 16:02:16 -- common/autotest_common.sh@641 -- # es=1 00:06:14.984 16:02:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:14.984 16:02:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:14.984 16:02:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:14.984 16:02:16 -- app/cmdline.sh@1 -- # killprocess 3298394 00:06:14.984 16:02:16 -- common/autotest_common.sh@936 -- # '[' -z 3298394 ']' 00:06:14.984 16:02:16 -- common/autotest_common.sh@940 -- # kill -0 3298394 00:06:14.984 16:02:16 -- common/autotest_common.sh@941 -- # uname 00:06:14.984 16:02:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.984 16:02:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3298394 00:06:14.984 16:02:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.984 16:02:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.984 16:02:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3298394' 00:06:14.984 killing process with pid 3298394 00:06:14.984 16:02:16 -- common/autotest_common.sh@955 -- # kill 3298394 00:06:14.984 16:02:16 -- common/autotest_common.sh@960 -- # wait 3298394 00:06:15.551 00:06:15.551 real 0m1.661s 00:06:15.551 user 0m2.042s 00:06:15.551 sys 0m0.471s 00:06:15.551 16:02:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.551 16:02:16 -- common/autotest_common.sh@10 -- # set +x 00:06:15.551 ************************************ 00:06:15.551 END TEST app_cmdline 00:06:15.551 ************************************ 00:06:15.551 16:02:16 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:15.551 16:02:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.551 16:02:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.551 16:02:16 -- common/autotest_common.sh@10 -- # set +x 00:06:15.809 ************************************ 00:06:15.809 START TEST version 00:06:15.809 
************************************ 00:06:15.809 16:02:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:15.809 * Looking for test storage... 00:06:15.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:15.809 16:02:16 -- app/version.sh@17 -- # get_header_version major 00:06:15.809 16:02:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.809 16:02:16 -- app/version.sh@14 -- # cut -f2 00:06:15.809 16:02:16 -- app/version.sh@14 -- # tr -d '"' 00:06:15.809 16:02:16 -- app/version.sh@17 -- # major=24 00:06:15.809 16:02:16 -- app/version.sh@18 -- # get_header_version minor 00:06:15.809 16:02:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.809 16:02:16 -- app/version.sh@14 -- # cut -f2 00:06:15.809 16:02:16 -- app/version.sh@14 -- # tr -d '"' 00:06:15.809 16:02:16 -- app/version.sh@18 -- # minor=5 00:06:15.809 16:02:16 -- app/version.sh@19 -- # get_header_version patch 00:06:15.809 16:02:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.809 16:02:16 -- app/version.sh@14 -- # cut -f2 00:06:15.809 16:02:16 -- app/version.sh@14 -- # tr -d '"' 00:06:15.809 16:02:16 -- app/version.sh@19 -- # patch=0 00:06:15.809 16:02:16 -- app/version.sh@20 -- # get_header_version suffix 00:06:15.809 16:02:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.809 16:02:16 -- app/version.sh@14 -- # cut -f2 00:06:15.809 16:02:16 -- app/version.sh@14 -- # tr -d '"' 00:06:15.809 16:02:16 -- app/version.sh@20 -- # suffix=-pre 00:06:15.809 16:02:16 -- app/version.sh@22 -- # version=24.5 00:06:15.809 16:02:16 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:15.809 16:02:16 -- app/version.sh@28 -- # version=24.5rc0 00:06:15.809 16:02:16 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:15.809 16:02:16 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:15.809 16:02:16 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:15.809 16:02:16 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:15.809 00:06:15.809 real 0m0.112s 00:06:15.809 user 0m0.065s 00:06:15.809 sys 0m0.067s 00:06:15.809 16:02:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.809 16:02:16 -- common/autotest_common.sh@10 -- # set +x 00:06:15.809 ************************************ 00:06:15.809 END TEST version 00:06:15.809 ************************************ 00:06:15.809 16:02:16 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:15.809 16:02:16 -- spdk/autotest.sh@194 -- # uname -s 00:06:15.809 16:02:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:15.809 16:02:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.809 16:02:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.809 16:02:16 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:15.809 16:02:16 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:15.809 16:02:16 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:15.809 16:02:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:15.809 16:02:16 -- common/autotest_common.sh@10 -- # set +x 00:06:15.809 16:02:17 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:15.809 16:02:17 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:15.809 16:02:17 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:15.809 16:02:17 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:15.809 16:02:17 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:15.809 16:02:17 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:15.809 16:02:17 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.809 16:02:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:15.809 16:02:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.809 16:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.105 ************************************ 00:06:16.105 START TEST nvmf_tcp 00:06:16.105 ************************************ 00:06:16.105 16:02:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:16.105 * Looking for test storage... 00:06:16.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:16.105 16:02:17 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:16.105 16:02:17 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:16.105 16:02:17 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.105 16:02:17 -- nvmf/common.sh@7 -- # uname -s 00:06:16.105 16:02:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.105 16:02:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.105 16:02:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.105 16:02:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.105 16:02:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.105 16:02:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.105 16:02:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.105 16:02:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.105 16:02:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.105 16:02:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.105 16:02:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:16.105 16:02:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:16.105 16:02:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.105 16:02:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.105 16:02:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.105 16:02:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.105 16:02:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.105 16:02:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.105 16:02:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.105 16:02:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.105 16:02:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.105 16:02:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.105 16:02:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.105 16:02:17 -- paths/export.sh@5 -- # export PATH 00:06:16.105 16:02:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.105 16:02:17 -- nvmf/common.sh@47 -- # : 0 00:06:16.105 16:02:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.105 16:02:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.105 16:02:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.105 16:02:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.105 16:02:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.105 16:02:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.105 16:02:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.105 16:02:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.105 16:02:17 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:16.105 16:02:17 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:16.105 16:02:17 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:16.105 16:02:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:16.105 16:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.105 16:02:17 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:16.105 16:02:17 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:16.105 16:02:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:16.105 16:02:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.105 16:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.105 ************************************ 00:06:16.105 START TEST nvmf_example 00:06:16.105 ************************************ 00:06:16.105 16:02:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:16.105 * Looking for test storage... 
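One detail of nvmf/common.sh visible in the trace above is that the initiator identity is generated, not hard-coded. A sketch of the derivation; the exact shell expansion upstream may differ, and the UUID shown is this node's:

  # gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>; the trailing UUID
  # doubles as the host ID (29f67375-a902-e411-ace9-001e67bc3c9a here).
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")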
00:06:16.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.105 16:02:17 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.105 16:02:17 -- nvmf/common.sh@7 -- # uname -s 00:06:16.105 16:02:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.105 16:02:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.105 16:02:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.105 16:02:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.105 16:02:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.105 16:02:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.105 16:02:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.105 16:02:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.105 16:02:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.105 16:02:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.105 16:02:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:16.105 16:02:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:16.105 16:02:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.105 16:02:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.105 16:02:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.105 16:02:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.105 16:02:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.105 16:02:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.105 16:02:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.105 16:02:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.105 16:02:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.106 16:02:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.106 16:02:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.106 16:02:17 -- paths/export.sh@5 -- # export PATH 00:06:16.106 16:02:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.106 16:02:17 -- nvmf/common.sh@47 -- # : 0 00:06:16.106 16:02:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.106 16:02:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.106 16:02:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.106 16:02:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.106 16:02:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.106 16:02:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.106 16:02:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.106 16:02:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.106 16:02:17 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:16.106 16:02:17 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:16.106 16:02:17 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:16.106 16:02:17 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:16.106 16:02:17 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:16.106 16:02:17 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:16.106 16:02:17 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:16.106 16:02:17 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:16.106 16:02:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:16.106 16:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.106 16:02:17 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:16.106 16:02:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:16.106 16:02:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.106 16:02:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:16.106 16:02:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:16.106 16:02:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:16.106 16:02:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.106 16:02:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:16.106 16:02:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.106 16:02:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:16.106 16:02:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:16.106 16:02:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:16.106 16:02:17 -- 
common/autotest_common.sh@10 -- # set +x 00:06:18.030 16:02:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:18.030 16:02:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:18.030 16:02:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:18.030 16:02:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:18.030 16:02:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:18.030 16:02:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:18.030 16:02:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:18.030 16:02:19 -- nvmf/common.sh@295 -- # net_devs=() 00:06:18.030 16:02:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:18.030 16:02:19 -- nvmf/common.sh@296 -- # e810=() 00:06:18.030 16:02:19 -- nvmf/common.sh@296 -- # local -ga e810 00:06:18.030 16:02:19 -- nvmf/common.sh@297 -- # x722=() 00:06:18.030 16:02:19 -- nvmf/common.sh@297 -- # local -ga x722 00:06:18.030 16:02:19 -- nvmf/common.sh@298 -- # mlx=() 00:06:18.030 16:02:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:18.030 16:02:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:18.031 16:02:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:18.031 16:02:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:18.031 16:02:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:18.031 16:02:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.031 16:02:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:18.031 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:18.031 16:02:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.031 16:02:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:18.031 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:18.031 16:02:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
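Both e810 ports (0x8086:0x159b) are matched below, after which nvmf_tcp_init wires them up. Condensed from the commands that follow in the trace, using this machine's cvl_0_0/cvl_0_1 names (address flushes omitted):

  # Target interface moves into its own network namespace; initiator stays on the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                                 # reachability check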
00:06:18.031 16:02:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:18.031 16:02:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.031 16:02:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.031 16:02:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:18.031 16:02:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.031 16:02:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:18.031 Found net devices under 0000:09:00.0: cvl_0_0 00:06:18.031 16:02:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.031 16:02:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.031 16:02:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.031 16:02:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:18.031 16:02:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.031 16:02:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:18.031 Found net devices under 0000:09:00.1: cvl_0_1 00:06:18.031 16:02:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.031 16:02:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:18.031 16:02:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:18.031 16:02:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:18.031 16:02:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:18.031 16:02:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.031 16:02:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:18.031 16:02:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:18.031 16:02:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:18.031 16:02:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:18.031 16:02:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:18.031 16:02:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:18.031 16:02:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:18.031 16:02:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.031 16:02:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:18.031 16:02:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:18.031 16:02:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:18.031 16:02:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:18.289 16:02:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:18.289 16:02:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:18.289 16:02:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:18.289 16:02:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:18.289 16:02:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:18.289 16:02:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:18.289 16:02:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:18.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:18.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:06:18.289 00:06:18.289 --- 10.0.0.2 ping statistics --- 00:06:18.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.289 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:06:18.289 16:02:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:18.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:18.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:06:18.289 00:06:18.289 --- 10.0.0.1 ping statistics --- 00:06:18.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.289 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:06:18.289 16:02:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:18.289 16:02:19 -- nvmf/common.sh@411 -- # return 0 00:06:18.289 16:02:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:18.289 16:02:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:18.289 16:02:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:18.289 16:02:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:18.289 16:02:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:18.289 16:02:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:18.289 16:02:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:18.289 16:02:19 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:18.289 16:02:19 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:18.289 16:02:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:18.289 16:02:19 -- common/autotest_common.sh@10 -- # set +x 00:06:18.289 16:02:19 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:18.289 16:02:19 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:18.289 16:02:19 -- target/nvmf_example.sh@34 -- # nvmfpid=3300329 00:06:18.289 16:02:19 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:18.289 16:02:19 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:18.289 16:02:19 -- target/nvmf_example.sh@36 -- # waitforlisten 3300329 00:06:18.289 16:02:19 -- common/autotest_common.sh@817 -- # '[' -z 3300329 ']' 00:06:18.289 16:02:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.289 16:02:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.289 16:02:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
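Once the example target is up, the test provisions it over RPC and points a perf initiator at the listener. The sequence below condenses the rpc_cmd calls and the spdk_nvme_perf invocation that follow in the trace; rpc_cmd stands in for the harness wrapper around rpc.py:

  # Provision the target (the app runs inside the cvl_0_0_ns_spdk namespace):
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512                      # 64 MiB bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Drive 10 seconds of 4 KiB random I/O (30% reads, -M 30) at queue depth 64:
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'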
00:06:18.289 16:02:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.289 16:02:19 -- common/autotest_common.sh@10 -- # set +x 00:06:18.289 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.222 16:02:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.222 16:02:20 -- common/autotest_common.sh@850 -- # return 0 00:06:19.222 16:02:20 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:19.222 16:02:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:19.222 16:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.479 16:02:20 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:19.479 16:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.479 16:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.479 16:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.479 16:02:20 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:19.479 16:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.479 16:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.479 16:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.479 16:02:20 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:19.479 16:02:20 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:19.479 16:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.479 16:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.479 16:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.479 16:02:20 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:19.479 16:02:20 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:19.480 16:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.480 16:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.480 16:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.480 16:02:20 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.480 16:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.480 16:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.480 16:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.480 16:02:20 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:19.480 16:02:20 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:19.480 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.672 Initializing NVMe Controllers 00:06:31.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:31.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:31.672 Initialization complete. Launching workers. 
00:06:31.672 ========================================================
00:06:31.672 Latency(us)
00:06:31.672 Device Information : IOPS MiB/s Average min max
00:06:31.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15267.69 59.64 4192.86 812.70 17439.46
00:06:31.672 ========================================================
00:06:31.672 Total : 15267.69 59.64 4192.86 812.70 17439.46
00:06:31.672
00:06:31.672 16:02:30 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:31.672 16:02:30 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:31.672 16:02:30 -- nvmf/common.sh@477 -- # nvmfcleanup
00:06:31.672 16:02:30 -- nvmf/common.sh@117 -- # sync
00:06:31.672 16:02:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:31.672 16:02:30 -- nvmf/common.sh@120 -- # set +e
00:06:31.672 16:02:30 -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:31.672 16:02:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:31.672 rmmod nvme_tcp
00:06:31.672 rmmod nvme_fabrics
00:06:31.672 rmmod nvme_keyring
00:06:31.672 16:02:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:31.672 16:02:30 -- nvmf/common.sh@124 -- # set -e
00:06:31.672 16:02:30 -- nvmf/common.sh@125 -- # return 0
00:06:31.672 16:02:30 -- nvmf/common.sh@478 -- # '[' -n 3300329 ']'
00:06:31.672 16:02:30 -- nvmf/common.sh@479 -- # killprocess 3300329
00:06:31.672 16:02:30 -- common/autotest_common.sh@936 -- # '[' -z 3300329 ']'
00:06:31.672 16:02:30 -- common/autotest_common.sh@940 -- # kill -0 3300329
00:06:31.672 16:02:30 -- common/autotest_common.sh@941 -- # uname
00:06:31.672 16:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:31.672 16:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3300329
00:06:31.672 16:02:30 -- common/autotest_common.sh@942 -- # process_name=nvmf
00:06:31.672 16:02:30 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']'
00:06:31.672 16:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3300329'
00:06:31.672 killing process with pid 3300329
00:06:31.672 16:02:30 -- common/autotest_common.sh@955 -- # kill 3300329
00:06:31.672 16:02:30 -- common/autotest_common.sh@960 -- # wait 3300329
00:06:31.672 nvmf threads initialize successfully
00:06:31.672 bdev subsystem init successfully
00:06:31.672 created a nvmf target service
00:06:31.672 create targets's poll groups done
00:06:31.672 all subsystems of target started
00:06:31.672 nvmf target is running
00:06:31.672 all subsystems of target stopped
00:06:31.672 destroy targets's poll groups done
00:06:31.672 destroyed the nvmf target service
00:06:31.672 bdev subsystem finish successfully
00:06:31.672 nvmf threads destroy successfully
00:06:31.672 16:02:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:06:31.672 16:02:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:06:31.672 16:02:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:06:31.672 16:02:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:31.672 16:02:31 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:06:31.672 16:02:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:31.672 16:02:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:31.672 16:02:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:31.930 16:02:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:06:31.930 16:02:33 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:06:31.930 16:02:33 -- common/autotest_common.sh@716 -- # xtrace_disable
00:06:31.930 16:02:33 -- common/autotest_common.sh@10 -- # set +x
00:06:32.191
00:06:32.191 real 0m15.953s
00:06:32.191 user 0m45.343s
00:06:32.191 sys 0m3.346s
00:06:32.191 16:02:33 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:32.191 16:02:33 -- common/autotest_common.sh@10 -- # set +x
00:06:32.191 ************************************
00:06:32.191 END TEST nvmf_example
00:06:32.191 ************************************
00:06:32.191 16:02:33 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:06:32.191 16:02:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:06:32.191 16:02:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:32.191 16:02:33 -- common/autotest_common.sh@10 -- # set +x
00:06:32.191 ************************************
00:06:32.191 START TEST nvmf_filesystem
00:06:32.191 ************************************
00:06:32.191 16:02:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:06:32.191 * Looking for test storage...
00:06:32.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:32.191 16:02:33 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:06:32.191 16:02:33 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:06:32.191 16:02:33 -- common/autotest_common.sh@34 -- # set -e
00:06:32.191 16:02:33 -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:06:32.191 16:02:33 -- common/autotest_common.sh@36 -- # shopt -s extglob
00:06:32.191 16:02:33 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:06:32.191 16:02:33 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:06:32.191 16:02:33 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:06:32.192 16:02:33 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:06:32.192 16:02:33 -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:06:32.192 16:02:33 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:06:32.192 16:02:33 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:06:32.192 16:02:33 -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:06:32.192 16:02:33 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:06:32.192 16:02:33 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:06:32.192 16:02:33 -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:06:32.192 16:02:33 -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:06:32.192 16:02:33 -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:06:32.192 16:02:33 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:06:32.192 16:02:33 -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:06:32.192 16:02:33 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:06:32.192 16:02:33 -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:06:32.192 16:02:33 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:06:32.192 16:02:33 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:06:32.192 16:02:33 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:06:32.192 16:02:33 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:06:32.192 16:02:33 --
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:32.192 16:02:33 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:32.192 16:02:33 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:32.192 16:02:33 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:32.192 16:02:33 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:32.192 16:02:33 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:32.192 16:02:33 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:32.192 16:02:33 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:32.192 16:02:33 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:32.192 16:02:33 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:32.192 16:02:33 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:32.192 16:02:33 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:32.192 16:02:33 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:32.192 16:02:33 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:32.192 16:02:33 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:32.192 16:02:33 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:32.192 16:02:33 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:32.192 16:02:33 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:32.192 16:02:33 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:32.192 16:02:33 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:32.192 16:02:33 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:32.192 16:02:33 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:32.192 16:02:33 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:32.192 16:02:33 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:32.192 16:02:33 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:32.192 16:02:33 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:32.192 16:02:33 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:32.192 16:02:33 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:32.192 16:02:33 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:32.192 16:02:33 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:32.192 16:02:33 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:32.192 16:02:33 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:32.192 16:02:33 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:32.192 16:02:33 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:32.192 16:02:33 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:32.192 16:02:33 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:32.192 16:02:33 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:32.192 16:02:33 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:32.192 16:02:33 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:32.192 16:02:33 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:32.192 16:02:33 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:32.192 16:02:33 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:32.192 16:02:33 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:32.192 16:02:33 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:32.192 16:02:33 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:32.192 16:02:33 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:32.192 
16:02:33 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:32.192 16:02:33 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:32.192 16:02:33 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:32.192 16:02:33 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:32.192 16:02:33 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:32.192 16:02:33 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:32.192 16:02:33 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:32.192 16:02:33 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:32.192 16:02:33 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:32.192 16:02:33 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:32.192 16:02:33 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:32.192 16:02:33 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:32.192 16:02:33 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:32.192 16:02:33 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:32.192 16:02:33 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:32.192 16:02:33 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:32.192 16:02:33 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:32.192 16:02:33 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:32.192 16:02:33 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:32.192 16:02:33 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:32.192 16:02:33 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:32.192 16:02:33 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:32.192 16:02:33 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.192 16:02:33 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:32.192 16:02:33 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:32.192 16:02:33 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:32.192 16:02:33 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:32.192 16:02:33 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:32.192 16:02:33 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:32.192 16:02:33 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:32.192 16:02:33 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:32.192 16:02:33 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:32.192 16:02:33 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:32.192 16:02:33 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:32.192 #define SPDK_CONFIG_H 00:06:32.192 #define SPDK_CONFIG_APPS 1 00:06:32.192 #define SPDK_CONFIG_ARCH native 00:06:32.192 #undef SPDK_CONFIG_ASAN 00:06:32.192 #undef SPDK_CONFIG_AVAHI 00:06:32.192 #undef SPDK_CONFIG_CET 00:06:32.192 #define SPDK_CONFIG_COVERAGE 1 00:06:32.192 #define SPDK_CONFIG_CROSS_PREFIX 00:06:32.192 #undef SPDK_CONFIG_CRYPTO 00:06:32.192 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:32.192 #undef 
SPDK_CONFIG_CUSTOMOCF 00:06:32.192 #undef SPDK_CONFIG_DAOS 00:06:32.192 #define SPDK_CONFIG_DAOS_DIR 00:06:32.192 #define SPDK_CONFIG_DEBUG 1 00:06:32.192 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:32.192 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:32.192 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:32.192 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:32.192 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:32.192 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:32.192 #define SPDK_CONFIG_EXAMPLES 1 00:06:32.192 #undef SPDK_CONFIG_FC 00:06:32.192 #define SPDK_CONFIG_FC_PATH 00:06:32.192 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:32.192 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:32.192 #undef SPDK_CONFIG_FUSE 00:06:32.192 #undef SPDK_CONFIG_FUZZER 00:06:32.192 #define SPDK_CONFIG_FUZZER_LIB 00:06:32.192 #undef SPDK_CONFIG_GOLANG 00:06:32.192 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:32.192 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:32.192 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:32.192 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:32.192 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:32.192 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:32.192 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:32.192 #define SPDK_CONFIG_IDXD 1 00:06:32.192 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:32.192 #undef SPDK_CONFIG_IPSEC_MB 00:06:32.192 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:32.192 #define SPDK_CONFIG_ISAL 1 00:06:32.192 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:32.192 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:32.192 #define SPDK_CONFIG_LIBDIR 00:06:32.192 #undef SPDK_CONFIG_LTO 00:06:32.192 #define SPDK_CONFIG_MAX_LCORES 00:06:32.192 #define SPDK_CONFIG_NVME_CUSE 1 00:06:32.192 #undef SPDK_CONFIG_OCF 00:06:32.192 #define SPDK_CONFIG_OCF_PATH 00:06:32.192 #define SPDK_CONFIG_OPENSSL_PATH 00:06:32.192 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:32.192 #define SPDK_CONFIG_PGO_DIR 00:06:32.192 #undef SPDK_CONFIG_PGO_USE 00:06:32.192 #define SPDK_CONFIG_PREFIX /usr/local 00:06:32.192 #undef SPDK_CONFIG_RAID5F 00:06:32.192 #undef SPDK_CONFIG_RBD 00:06:32.192 #define SPDK_CONFIG_RDMA 1 00:06:32.192 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:32.192 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:32.192 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:32.192 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:32.192 #define SPDK_CONFIG_SHARED 1 00:06:32.192 #undef SPDK_CONFIG_SMA 00:06:32.192 #define SPDK_CONFIG_TESTS 1 00:06:32.192 #undef SPDK_CONFIG_TSAN 00:06:32.192 #define SPDK_CONFIG_UBLK 1 00:06:32.193 #define SPDK_CONFIG_UBSAN 1 00:06:32.193 #undef SPDK_CONFIG_UNIT_TESTS 00:06:32.193 #undef SPDK_CONFIG_URING 00:06:32.193 #define SPDK_CONFIG_URING_PATH 00:06:32.193 #undef SPDK_CONFIG_URING_ZNS 00:06:32.193 #undef SPDK_CONFIG_USDT 00:06:32.193 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:32.193 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:32.193 #define SPDK_CONFIG_VFIO_USER 1 00:06:32.193 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:32.193 #define SPDK_CONFIG_VHOST 1 00:06:32.193 #define SPDK_CONFIG_VIRTIO 1 00:06:32.193 #undef SPDK_CONFIG_VTUNE 00:06:32.193 #define SPDK_CONFIG_VTUNE_DIR 00:06:32.193 #define SPDK_CONFIG_WERROR 1 00:06:32.193 #define SPDK_CONFIG_WPDK_DIR 00:06:32.193 #undef SPDK_CONFIG_XNVME 00:06:32.193 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:32.193 16:02:33 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:32.193 16:02:33 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.193 16:02:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.193 16:02:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.193 16:02:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.193 16:02:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.193 16:02:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.193 16:02:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.193 16:02:33 -- paths/export.sh@5 -- # export PATH 00:06:32.193 16:02:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.193 16:02:33 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:32.193 16:02:33 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:32.193 16:02:33 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:32.193 16:02:33 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:32.193 16:02:33 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:32.193 16:02:33 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.193 16:02:33 -- pm/common@67 -- # TEST_TAG=N/A 00:06:32.193 16:02:33 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:32.193 16:02:33 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:32.193 16:02:33 -- pm/common@71 -- # uname -s 00:06:32.193 16:02:33 -- pm/common@71 -- # PM_OS=Linux 00:06:32.193 16:02:33 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:32.193 16:02:33 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:06:32.193 16:02:33 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:32.193 16:02:33 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:32.193 16:02:33 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:32.193 16:02:33 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:32.193 16:02:33 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:32.193 16:02:33 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:32.193 16:02:33 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:32.193 16:02:33 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:32.193 16:02:33 -- common/autotest_common.sh@57 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:32.193 16:02:33 -- common/autotest_common.sh@61 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:32.193 16:02:33 -- common/autotest_common.sh@63 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:32.193 16:02:33 -- common/autotest_common.sh@65 -- # : 1 00:06:32.193 16:02:33 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:32.193 16:02:33 -- common/autotest_common.sh@67 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:32.193 16:02:33 -- common/autotest_common.sh@69 -- # : 00:06:32.193 16:02:33 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:32.193 16:02:33 -- common/autotest_common.sh@71 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:32.193 16:02:33 -- common/autotest_common.sh@73 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:32.193 16:02:33 -- common/autotest_common.sh@75 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:32.193 16:02:33 -- common/autotest_common.sh@77 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:32.193 16:02:33 -- common/autotest_common.sh@79 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:32.193 16:02:33 -- common/autotest_common.sh@81 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:32.193 16:02:33 -- common/autotest_common.sh@83 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:32.193 16:02:33 -- common/autotest_common.sh@85 -- # : 1 00:06:32.193 16:02:33 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:32.193 16:02:33 -- common/autotest_common.sh@87 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:32.193 16:02:33 -- common/autotest_common.sh@89 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:32.193 16:02:33 -- common/autotest_common.sh@91 -- # : 1 
00:06:32.193 16:02:33 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:32.193 16:02:33 -- common/autotest_common.sh@93 -- # : 1 00:06:32.193 16:02:33 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:32.193 16:02:33 -- common/autotest_common.sh@95 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:32.193 16:02:33 -- common/autotest_common.sh@97 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:32.193 16:02:33 -- common/autotest_common.sh@99 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:32.193 16:02:33 -- common/autotest_common.sh@101 -- # : tcp 00:06:32.193 16:02:33 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:32.193 16:02:33 -- common/autotest_common.sh@103 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:32.193 16:02:33 -- common/autotest_common.sh@105 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:32.193 16:02:33 -- common/autotest_common.sh@107 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:32.193 16:02:33 -- common/autotest_common.sh@109 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:32.193 16:02:33 -- common/autotest_common.sh@111 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:32.193 16:02:33 -- common/autotest_common.sh@113 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:32.193 16:02:33 -- common/autotest_common.sh@115 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:32.193 16:02:33 -- common/autotest_common.sh@117 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:32.193 16:02:33 -- common/autotest_common.sh@119 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:32.193 16:02:33 -- common/autotest_common.sh@121 -- # : 1 00:06:32.193 16:02:33 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:32.193 16:02:33 -- common/autotest_common.sh@123 -- # : 00:06:32.193 16:02:33 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:32.193 16:02:33 -- common/autotest_common.sh@125 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:32.193 16:02:33 -- common/autotest_common.sh@127 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:32.193 16:02:33 -- common/autotest_common.sh@129 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:32.193 16:02:33 -- common/autotest_common.sh@131 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:32.193 16:02:33 -- common/autotest_common.sh@133 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:32.193 16:02:33 -- common/autotest_common.sh@135 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:32.193 16:02:33 -- common/autotest_common.sh@137 -- # : 00:06:32.193 16:02:33 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:32.193 16:02:33 -- 
common/autotest_common.sh@139 -- # : true 00:06:32.193 16:02:33 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:32.193 16:02:33 -- common/autotest_common.sh@141 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:32.193 16:02:33 -- common/autotest_common.sh@143 -- # : 0 00:06:32.193 16:02:33 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:32.194 16:02:33 -- common/autotest_common.sh@145 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:32.194 16:02:33 -- common/autotest_common.sh@147 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:32.194 16:02:33 -- common/autotest_common.sh@149 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:32.194 16:02:33 -- common/autotest_common.sh@151 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:32.194 16:02:33 -- common/autotest_common.sh@153 -- # : e810 00:06:32.194 16:02:33 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:32.194 16:02:33 -- common/autotest_common.sh@155 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:32.194 16:02:33 -- common/autotest_common.sh@157 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:32.194 16:02:33 -- common/autotest_common.sh@159 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:32.194 16:02:33 -- common/autotest_common.sh@161 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:32.194 16:02:33 -- common/autotest_common.sh@163 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:32.194 16:02:33 -- common/autotest_common.sh@166 -- # : 00:06:32.194 16:02:33 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:32.194 16:02:33 -- common/autotest_common.sh@168 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:32.194 16:02:33 -- common/autotest_common.sh@170 -- # : 0 00:06:32.194 16:02:33 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:32.194 16:02:33 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:32.194 16:02:33 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:32.194 16:02:33 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:32.194 16:02:33 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:32.194 16:02:33 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.194 16:02:33 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.194 16:02:33 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.194 16:02:33 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.194 16:02:33 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:32.194 16:02:33 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:32.194 16:02:33 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:32.194 16:02:33 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:32.194 16:02:33 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:32.194 16:02:33 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:32.194 16:02:33 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:32.194 16:02:33 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:32.194 16:02:33 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:32.194 16:02:33 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:32.194 16:02:33 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:32.194 16:02:33 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:32.194 16:02:33 -- common/autotest_common.sh@199 -- # cat 00:06:32.194 16:02:33 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:32.194 16:02:33 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:32.194 16:02:33 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:32.194 16:02:33 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:32.194 16:02:33 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:32.194 16:02:33 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:32.194 16:02:33 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:32.194 16:02:33 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:32.194 16:02:33 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:32.194 16:02:33 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:32.194 16:02:33 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:32.194 16:02:33 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.194 16:02:33 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.194 16:02:33 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.194 16:02:33 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.194 16:02:33 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:32.194 16:02:33 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:32.194 16:02:33 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.194 16:02:33 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.194 16:02:33 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:32.194 16:02:33 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:32.194 16:02:33 -- common/autotest_common.sh@252 -- # valgrind= 00:06:32.194 16:02:33 -- common/autotest_common.sh@258 -- # uname -s 00:06:32.194 16:02:33 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:32.194 16:02:33 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:32.194 16:02:33 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:32.194 16:02:33 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:32.194 16:02:33 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:32.194 16:02:33 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:32.194 
16:02:33 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:32.194 16:02:33 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:06:32.194 16:02:33 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:32.194 16:02:33 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:32.194 16:02:33 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:32.194 16:02:33 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:32.194 16:02:33 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:32.194 16:02:33 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:32.194 16:02:33 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:32.194 16:02:33 -- common/autotest_common.sh@307 -- # [[ -z 3302159 ]] 00:06:32.194 16:02:33 -- common/autotest_common.sh@307 -- # kill -0 3302159 00:06:32.194 16:02:33 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:32.194 16:02:33 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:32.194 16:02:33 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:32.194 16:02:33 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:32.194 16:02:33 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:32.194 16:02:33 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:32.194 16:02:33 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:32.194 16:02:33 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:32.194 16:02:33 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.EFRPEt 00:06:32.194 16:02:33 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:32.194 16:02:33 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:32.194 16:02:33 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:32.194 16:02:33 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EFRPEt/tests/target /tmp/spdk.EFRPEt 00:06:32.194 16:02:33 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:32.194 16:02:33 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.194 16:02:33 -- common/autotest_common.sh@316 -- # df -T 00:06:32.194 16:02:33 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:32.194 16:02:33 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:32.194 16:02:33 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:32.194 16:02:33 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:32.194 16:02:33 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:32.194 16:02:33 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:32.195 16:02:33 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # avails["$mount"]=54243614720 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67279552512 00:06:32.195 16:02:33 -- common/autotest_common.sh@352 -- # uses["$mount"]=13035937792 00:06:32.195 16:02:33 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=tmpfs 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # avails["$mount"]=33637163008 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # sizes["$mount"]=33639776256 00:06:32.195 16:02:33 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:06:32.195 16:02:33 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # avails["$mount"]=13447155712 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # sizes["$mount"]=13455912960 00:06:32.195 16:02:33 -- common/autotest_common.sh@352 -- # uses["$mount"]=8757248 00:06:32.195 16:02:33 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # avails["$mount"]=33639247872 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # sizes["$mount"]=33639776256 00:06:32.195 16:02:33 -- common/autotest_common.sh@352 -- # uses["$mount"]=528384 00:06:32.195 16:02:33 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.195 16:02:33 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # avails["$mount"]=6727950336 00:06:32.195 16:02:33 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6727954432 00:06:32.195 16:02:33 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:32.195 16:02:33 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.195 16:02:33 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:32.195 * Looking for test storage... 
00:06:32.195 16:02:33 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:32.195 16:02:33 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:32.195 16:02:33 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.195 16:02:33 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:32.195 16:02:33 -- common/autotest_common.sh@361 -- # mount=/ 00:06:32.195 16:02:33 -- common/autotest_common.sh@363 -- # target_space=54243614720 00:06:32.195 16:02:33 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:32.195 16:02:33 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:32.195 16:02:33 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:32.195 16:02:33 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:32.195 16:02:33 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:32.195 16:02:33 -- common/autotest_common.sh@370 -- # new_size=15250530304 00:06:32.195 16:02:33 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:32.195 16:02:33 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.195 16:02:33 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.195 16:02:33 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.195 16:02:33 -- common/autotest_common.sh@378 -- # return 0 00:06:32.195 16:02:33 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:32.195 16:02:33 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:32.195 16:02:33 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:32.195 16:02:33 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:32.195 16:02:33 -- common/autotest_common.sh@1673 -- # true 00:06:32.195 16:02:33 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:32.195 16:02:33 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:32.195 16:02:33 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:32.195 16:02:33 -- common/autotest_common.sh@27 -- # exec 00:06:32.195 16:02:33 -- common/autotest_common.sh@29 -- # exec 00:06:32.195 16:02:33 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:32.195 16:02:33 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:32.195 16:02:33 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:32.195 16:02:33 -- common/autotest_common.sh@18 -- # set -x 00:06:32.195 16:02:33 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.453 16:02:33 -- nvmf/common.sh@7 -- # uname -s 00:06:32.453 16:02:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.453 16:02:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.453 16:02:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.453 16:02:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.453 16:02:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.453 16:02:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.453 16:02:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.453 16:02:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.453 16:02:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.453 16:02:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.453 16:02:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:32.453 16:02:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:32.453 16:02:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.453 16:02:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.453 16:02:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.453 16:02:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.453 16:02:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.453 16:02:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.453 16:02:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.453 16:02:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.453 16:02:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.453 16:02:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.454 16:02:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.454 16:02:33 -- paths/export.sh@5 -- # export PATH 00:06:32.454 16:02:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.454 16:02:33 -- nvmf/common.sh@47 -- # : 0 00:06:32.454 16:02:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.454 16:02:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.454 16:02:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.454 16:02:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.454 16:02:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.454 16:02:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:32.454 16:02:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.454 16:02:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.454 16:02:33 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:32.454 16:02:33 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:32.454 16:02:33 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:32.454 16:02:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:32.454 16:02:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.454 16:02:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:32.454 16:02:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:32.454 16:02:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:32.454 16:02:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.454 16:02:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.454 16:02:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.454 16:02:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:32.454 16:02:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:32.454 16:02:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:32.454 16:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:34.352 16:02:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:34.352 16:02:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:34.352 16:02:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:34.352 16:02:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:34.352 16:02:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:34.352 16:02:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:34.352 16:02:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:34.352 16:02:35 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:34.352 16:02:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:34.352 16:02:35 -- nvmf/common.sh@296 -- # e810=() 00:06:34.352 16:02:35 -- nvmf/common.sh@296 -- # local -ga e810 00:06:34.352 16:02:35 -- nvmf/common.sh@297 -- # x722=() 00:06:34.352 16:02:35 -- nvmf/common.sh@297 -- # local -ga x722 00:06:34.352 16:02:35 -- nvmf/common.sh@298 -- # mlx=() 00:06:34.352 16:02:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:34.352 16:02:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.352 16:02:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:34.352 16:02:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:34.352 16:02:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:34.352 16:02:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.352 16:02:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:34.352 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:34.352 16:02:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.352 16:02:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:34.352 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:34.352 16:02:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:34.352 16:02:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:34.352 16:02:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.352 16:02:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.352 16:02:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:34.352 16:02:35 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.352 16:02:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:34.352 Found net devices under 0000:09:00.0: cvl_0_0 00:06:34.352 16:02:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.352 16:02:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.353 16:02:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.353 16:02:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:34.353 16:02:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.353 16:02:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:34.353 Found net devices under 0000:09:00.1: cvl_0_1 00:06:34.353 16:02:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.353 16:02:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:34.353 16:02:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:34.353 16:02:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:34.353 16:02:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:34.353 16:02:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:34.353 16:02:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.353 16:02:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.353 16:02:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.353 16:02:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:34.353 16:02:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.353 16:02:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.353 16:02:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:34.353 16:02:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.353 16:02:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.353 16:02:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:34.353 16:02:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:34.353 16:02:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.353 16:02:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.353 16:02:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.353 16:02:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.353 16:02:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:34.353 16:02:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.353 16:02:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.353 16:02:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.353 16:02:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:34.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:06:34.353 00:06:34.353 --- 10.0.0.2 ping statistics --- 00:06:34.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.353 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:06:34.353 16:02:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:34.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:06:34.353 00:06:34.353 --- 10.0.0.1 ping statistics --- 00:06:34.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.353 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:06:34.353 16:02:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.353 16:02:35 -- nvmf/common.sh@411 -- # return 0 00:06:34.353 16:02:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:34.353 16:02:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.353 16:02:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:34.353 16:02:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:34.353 16:02:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.353 16:02:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:34.353 16:02:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:34.353 16:02:35 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:34.353 16:02:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:34.353 16:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.353 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.353 ************************************ 00:06:34.353 START TEST nvmf_filesystem_no_in_capsule 00:06:34.353 ************************************ 00:06:34.353 16:02:35 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:34.353 16:02:35 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:34.353 16:02:35 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:34.353 16:02:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:34.353 16:02:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:34.353 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.353 16:02:35 -- nvmf/common.sh@470 -- # nvmfpid=3303790 00:06:34.353 16:02:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:34.353 16:02:35 -- nvmf/common.sh@471 -- # waitforlisten 3303790 00:06:34.353 16:02:35 -- common/autotest_common.sh@817 -- # '[' -z 3303790 ']' 00:06:34.353 16:02:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.353 16:02:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:34.353 16:02:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.353 16:02:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:34.353 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.353 [2024-04-24 16:02:35.618587] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:06:34.353 [2024-04-24 16:02:35.618657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.612 [2024-04-24 16:02:35.686477] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.612 [2024-04-24 16:02:35.803706] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
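[Editor's note: the nvmftestinit sequence above builds the two-endpoint TCP topology that every test below relies on. One port of the dual-port E810 NIC (cvl_0_0, 10.0.0.2) is moved into the network namespace cvl_0_0_ns_spdk and hosts the SPDK target, while the peer port (cvl_0_1, 10.0.0.1) stays in the root namespace and acts as the kernel initiator; the two pings verify reachability in both directions before the target starts. A condensed sketch of that setup, with interface names and addresses taken from this log (error handling and the helper plumbing in test/nvmf/common.sh omitted):

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0          # target-side NIC port, moved into the namespace
  INI_IF=cvl_0_1          # initiator-side NIC port, stays in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # root ns reaches the target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns reaches the root ns

Everything the target does from here on, nvmf_tgt itself and every rpc_cmd, is executed through `ip netns exec cvl_0_0_ns_spdk`, which is why the NVMF_APP command line below carries that prefix.]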
00:06:34.612 [2024-04-24 16:02:35.803767] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.612 [2024-04-24 16:02:35.803799] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.612 [2024-04-24 16:02:35.803812] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.612 [2024-04-24 16:02:35.803823] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:34.612 [2024-04-24 16:02:35.803884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.612 [2024-04-24 16:02:35.803954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.612 [2024-04-24 16:02:35.803977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.612 [2024-04-24 16:02:35.803980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.870 16:02:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:34.870 16:02:35 -- common/autotest_common.sh@850 -- # return 0 00:06:34.870 16:02:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:34.870 16:02:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:34.870 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.870 16:02:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.870 16:02:35 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:34.870 16:02:35 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:34.870 16:02:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.870 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.870 [2024-04-24 16:02:35.963435] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.870 16:02:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.870 16:02:35 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:34.870 16:02:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.870 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.870 Malloc1 00:06:34.870 16:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.870 16:02:36 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:34.870 16:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.870 16:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.870 16:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.870 16:02:36 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:34.870 16:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.870 16:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.870 16:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.870 16:02:36 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.870 16:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.870 16:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.870 [2024-04-24 16:02:36.149147] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.870 16:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.870 16:02:36 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:06:34.870 16:02:36 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:34.870 16:02:36 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:34.870 16:02:36 -- common/autotest_common.sh@1366 -- # local bs 00:06:34.870 16:02:36 -- common/autotest_common.sh@1367 -- # local nb 00:06:35.128 16:02:36 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:35.128 16:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:35.128 16:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.128 16:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:35.128 16:02:36 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:35.128 { 00:06:35.128 "name": "Malloc1", 00:06:35.128 "aliases": [ 00:06:35.128 "cffd0185-3c74-4f40-9dc6-cd6929b50d72" 00:06:35.128 ], 00:06:35.128 "product_name": "Malloc disk", 00:06:35.128 "block_size": 512, 00:06:35.128 "num_blocks": 1048576, 00:06:35.128 "uuid": "cffd0185-3c74-4f40-9dc6-cd6929b50d72", 00:06:35.128 "assigned_rate_limits": { 00:06:35.128 "rw_ios_per_sec": 0, 00:06:35.128 "rw_mbytes_per_sec": 0, 00:06:35.128 "r_mbytes_per_sec": 0, 00:06:35.128 "w_mbytes_per_sec": 0 00:06:35.128 }, 00:06:35.128 "claimed": true, 00:06:35.128 "claim_type": "exclusive_write", 00:06:35.128 "zoned": false, 00:06:35.128 "supported_io_types": { 00:06:35.128 "read": true, 00:06:35.128 "write": true, 00:06:35.128 "unmap": true, 00:06:35.128 "write_zeroes": true, 00:06:35.128 "flush": true, 00:06:35.128 "reset": true, 00:06:35.128 "compare": false, 00:06:35.128 "compare_and_write": false, 00:06:35.128 "abort": true, 00:06:35.128 "nvme_admin": false, 00:06:35.128 "nvme_io": false 00:06:35.128 }, 00:06:35.128 "memory_domains": [ 00:06:35.128 { 00:06:35.128 "dma_device_id": "system", 00:06:35.128 "dma_device_type": 1 00:06:35.128 }, 00:06:35.128 { 00:06:35.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.128 "dma_device_type": 2 00:06:35.128 } 00:06:35.128 ], 00:06:35.128 "driver_specific": {} 00:06:35.128 } 00:06:35.128 ]' 00:06:35.128 16:02:36 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:35.128 16:02:36 -- common/autotest_common.sh@1369 -- # bs=512 00:06:35.128 16:02:36 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:35.128 16:02:36 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:35.128 16:02:36 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:35.128 16:02:36 -- common/autotest_common.sh@1374 -- # echo 512 00:06:35.128 16:02:36 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:35.128 16:02:36 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:35.691 16:02:36 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:35.691 16:02:36 -- common/autotest_common.sh@1184 -- # local i=0 00:06:35.691 16:02:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:35.691 16:02:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:35.691 16:02:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:37.588 16:02:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:37.588 16:02:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:37.588 16:02:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:37.588 16:02:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
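[Editor's note: the block above is the per-test provisioning handshake. rpc_cmd, a thin wrapper that drives scripts/rpc.py against the target's /var/tmp/spdk.sock, creates the TCP transport, a 512 MiB malloc bdev, a subsystem, a namespace, and a listener; then the kernel initiator connects. A condensed replay of the same calls as this log shows them, assuming rpc.py is on PATH and the namespaced target above is running:

  # target side (run inside the namespace, like everything above)
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 512 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side (root namespace)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid=29f67375-a902-e411-ace9-001e67bc3c9a

The `bdev_get_bdevs` JSON dump in between exists only so get_bdev_size can derive the bdev's size from block_size and num_blocks (512 x 1048576 = 536870912 bytes), which the test later compares against the size the initiator reports; waitforserial then polls `lsblk -l -o NAME,SERIAL` until the device with that serial shows up.]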
00:06:37.588 16:02:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:37.588 16:02:38 -- common/autotest_common.sh@1194 -- # return 0 00:06:37.588 16:02:38 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:37.588 16:02:38 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:37.588 16:02:38 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:37.588 16:02:38 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:37.588 16:02:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:37.588 16:02:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:37.588 16:02:38 -- setup/common.sh@80 -- # echo 536870912 00:06:37.588 16:02:38 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:37.588 16:02:38 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:37.588 16:02:38 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:37.588 16:02:38 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:37.846 16:02:39 -- target/filesystem.sh@69 -- # partprobe 00:06:38.410 16:02:39 -- target/filesystem.sh@70 -- # sleep 1 00:06:39.782 16:02:40 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:39.782 16:02:40 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:39.782 16:02:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:39.782 16:02:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.782 16:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:39.782 ************************************ 00:06:39.782 START TEST filesystem_ext4 00:06:39.782 ************************************ 00:06:39.782 16:02:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:39.782 16:02:40 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:39.782 16:02:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:39.782 16:02:40 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:39.782 16:02:40 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:39.782 16:02:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:39.782 16:02:40 -- common/autotest_common.sh@914 -- # local i=0 00:06:39.782 16:02:40 -- common/autotest_common.sh@915 -- # local force 00:06:39.782 16:02:40 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:39.782 16:02:40 -- common/autotest_common.sh@918 -- # force=-F 00:06:39.782 16:02:40 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:39.782 mke2fs 1.46.5 (30-Dec-2021) 00:06:39.782 Discarding device blocks: 0/522240 done 00:06:39.782 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:39.782 Filesystem UUID: 344903a7-89cc-4c8b-8ac2-e215a0068a56 00:06:39.782 Superblock backups stored on blocks: 00:06:39.782 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:39.782 00:06:39.782 Allocating group tables: 0/64 done 00:06:39.782 Writing inode tables: 0/64 done 00:06:39.782 Creating journal (8192 blocks): done 00:06:39.782 Writing superblocks and filesystem accounting information: 0/64 done 00:06:39.782 00:06:39.782 16:02:41 -- common/autotest_common.sh@931 -- # return 0 00:06:39.782 16:02:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:40.039 16:02:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:40.040 16:02:41 -- target/filesystem.sh@25 -- # sync 00:06:40.040 16:02:41 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:06:40.040 16:02:41 -- target/filesystem.sh@27 -- # sync 00:06:40.040 16:02:41 -- target/filesystem.sh@29 -- # i=0 00:06:40.040 16:02:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:40.040 16:02:41 -- target/filesystem.sh@37 -- # kill -0 3303790 00:06:40.040 16:02:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:40.040 16:02:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:40.040 16:02:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:40.040 16:02:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:40.040 00:06:40.040 real 0m0.459s 00:06:40.040 user 0m0.018s 00:06:40.040 sys 0m0.032s 00:06:40.040 16:02:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.040 16:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:40.040 ************************************ 00:06:40.040 END TEST filesystem_ext4 00:06:40.040 ************************************ 00:06:40.040 16:02:41 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:40.040 16:02:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:40.040 16:02:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.040 16:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:40.297 ************************************ 00:06:40.297 START TEST filesystem_btrfs 00:06:40.297 ************************************ 00:06:40.297 16:02:41 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:40.297 16:02:41 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:40.297 16:02:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:40.297 16:02:41 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:40.297 16:02:41 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:40.298 16:02:41 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:40.298 16:02:41 -- common/autotest_common.sh@914 -- # local i=0 00:06:40.298 16:02:41 -- common/autotest_common.sh@915 -- # local force 00:06:40.298 16:02:41 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:40.298 16:02:41 -- common/autotest_common.sh@920 -- # force=-f 00:06:40.298 16:02:41 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:40.555 btrfs-progs v6.6.2 00:06:40.555 See https://btrfs.readthedocs.io for more information. 00:06:40.555 00:06:40.555 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
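[Editor's note: the ext4 subtest that just finished follows the same smoke-test shape every filesystem_* case uses: mkfs on the exported namespace's partition, mount, create and delete a file with syncs around it, unmount, then confirm with `kill -0` that the target survived the I/O and with lsblk that the controller and partition are still visible. A minimal restatement of that check (the helper name fs_smoke_test is ours; the steps are inlined in target/filesystem.sh, and the i=0 in the trace suggests the real code retries the unmount):

  fs_smoke_test() {
    local dev=/dev/nvme0n1p1 pid=$1
    mount "$dev" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$pid"                            # target process still alive?
    lsblk -l -o NAME | grep -q -w nvme0n1     # controller still present?
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present?
  }

The btrfs run continues below with the same pattern.]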
00:06:40.555 NOTE: several default settings have changed in version 5.15, please make sure 00:06:40.555 this does not affect your deployments: 00:06:40.555 - DUP for metadata (-m dup) 00:06:40.555 - enabled no-holes (-O no-holes) 00:06:40.555 - enabled free-space-tree (-R free-space-tree) 00:06:40.555 00:06:40.555 Label: (null) 00:06:40.555 UUID: 4129f59b-a84f-4803-bd48-5d4aa215bf2a 00:06:40.555 Node size: 16384 00:06:40.555 Sector size: 4096 00:06:40.555 Filesystem size: 510.00MiB 00:06:40.555 Block group profiles: 00:06:40.555 Data: single 8.00MiB 00:06:40.555 Metadata: DUP 32.00MiB 00:06:40.555 System: DUP 8.00MiB 00:06:40.555 SSD detected: yes 00:06:40.555 Zoned device: no 00:06:40.555 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:40.555 Runtime features: free-space-tree 00:06:40.555 Checksum: crc32c 00:06:40.555 Number of devices: 1 00:06:40.555 Devices: 00:06:40.555 ID SIZE PATH 00:06:40.555 1 510.00MiB /dev/nvme0n1p1 00:06:40.555 00:06:40.555 16:02:41 -- common/autotest_common.sh@931 -- # return 0 00:06:40.555 16:02:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:40.812 16:02:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:40.812 16:02:42 -- target/filesystem.sh@25 -- # sync 00:06:40.812 16:02:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:40.812 16:02:42 -- target/filesystem.sh@27 -- # sync 00:06:41.071 16:02:42 -- target/filesystem.sh@29 -- # i=0 00:06:41.071 16:02:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.071 16:02:42 -- target/filesystem.sh@37 -- # kill -0 3303790 00:06:41.071 16:02:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.071 16:02:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.071 16:02:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.071 16:02:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.071 00:06:41.071 real 0m0.742s 00:06:41.071 user 0m0.011s 00:06:41.071 sys 0m0.048s 00:06:41.071 16:02:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.071 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:41.071 ************************************ 00:06:41.071 END TEST filesystem_btrfs 00:06:41.071 ************************************ 00:06:41.071 16:02:42 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:41.071 16:02:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:41.071 16:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.071 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:41.071 ************************************ 00:06:41.071 START TEST filesystem_xfs 00:06:41.071 ************************************ 00:06:41.071 16:02:42 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:41.071 16:02:42 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:41.071 16:02:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.071 16:02:42 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:41.071 16:02:42 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:41.071 16:02:42 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:41.071 16:02:42 -- common/autotest_common.sh@914 -- # local i=0 00:06:41.071 16:02:42 -- common/autotest_common.sh@915 -- # local force 00:06:41.071 16:02:42 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:41.071 16:02:42 -- common/autotest_common.sh@920 -- # force=-f 00:06:41.071 16:02:42 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:41.071 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:41.071 = sectsz=512 attr=2, projid32bit=1 00:06:41.071 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:41.071 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:41.071 data = bsize=4096 blocks=130560, imaxpct=25 00:06:41.071 = sunit=0 swidth=0 blks 00:06:41.071 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:41.071 log =internal log bsize=4096 blocks=16384, version=2 00:06:41.071 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:41.071 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:42.004 Discarding blocks...Done. 00:06:42.004 16:02:43 -- common/autotest_common.sh@931 -- # return 0 00:06:42.004 16:02:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:44.534 16:02:45 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:44.534 16:02:45 -- target/filesystem.sh@25 -- # sync 00:06:44.534 16:02:45 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:44.534 16:02:45 -- target/filesystem.sh@27 -- # sync 00:06:44.534 16:02:45 -- target/filesystem.sh@29 -- # i=0 00:06:44.534 16:02:45 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:44.534 16:02:45 -- target/filesystem.sh@37 -- # kill -0 3303790 00:06:44.534 16:02:45 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:44.534 16:02:45 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:44.534 16:02:45 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:44.534 16:02:45 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:44.534 00:06:44.534 real 0m3.100s 00:06:44.534 user 0m0.015s 00:06:44.534 sys 0m0.041s 00:06:44.534 16:02:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.534 16:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.534 ************************************ 00:06:44.534 END TEST filesystem_xfs 00:06:44.534 ************************************ 00:06:44.534 16:02:45 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:44.534 16:02:45 -- target/filesystem.sh@93 -- # sync 00:06:44.534 16:02:45 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:44.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:44.534 16:02:45 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:44.534 16:02:45 -- common/autotest_common.sh@1205 -- # local i=0 00:06:44.534 16:02:45 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:44.535 16:02:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:44.535 16:02:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:44.535 16:02:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:44.535 16:02:45 -- common/autotest_common.sh@1217 -- # return 0 00:06:44.535 16:02:45 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:44.535 16:02:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:44.535 16:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.535 16:02:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:44.535 16:02:45 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:44.535 16:02:45 -- target/filesystem.sh@101 -- # killprocess 3303790 00:06:44.535 16:02:45 -- common/autotest_common.sh@936 -- # '[' -z 3303790 ']' 00:06:44.535 16:02:45 -- common/autotest_common.sh@940 -- # kill -0 3303790 00:06:44.535 16:02:45 -- 
common/autotest_common.sh@941 -- # uname 00:06:44.535 16:02:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.535 16:02:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3303790 00:06:44.535 16:02:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.535 16:02:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.535 16:02:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3303790' 00:06:44.535 killing process with pid 3303790 00:06:44.535 16:02:45 -- common/autotest_common.sh@955 -- # kill 3303790 00:06:44.535 16:02:45 -- common/autotest_common.sh@960 -- # wait 3303790 00:06:44.793 16:02:46 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:44.793 00:06:44.793 real 0m10.434s 00:06:44.793 user 0m39.810s 00:06:44.793 sys 0m1.730s 00:06:44.793 16:02:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.793 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:44.793 ************************************ 00:06:44.793 END TEST nvmf_filesystem_no_in_capsule 00:06:44.793 ************************************ 00:06:44.793 16:02:46 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:44.793 16:02:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:44.793 16:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.793 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.051 ************************************ 00:06:45.051 START TEST nvmf_filesystem_in_capsule 00:06:45.051 ************************************ 00:06:45.051 16:02:46 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:06:45.051 16:02:46 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:45.051 16:02:46 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:45.051 16:02:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:45.051 16:02:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:45.051 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.051 16:02:46 -- nvmf/common.sh@470 -- # nvmfpid=3305251 00:06:45.051 16:02:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:45.051 16:02:46 -- nvmf/common.sh@471 -- # waitforlisten 3305251 00:06:45.051 16:02:46 -- common/autotest_common.sh@817 -- # '[' -z 3305251 ']' 00:06:45.051 16:02:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.051 16:02:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:45.051 16:02:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.051 16:02:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:45.051 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.051 [2024-04-24 16:02:46.165615] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
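[Editor's note: the suite now reruns with a single knob changed: nvmf_filesystem_part receives 4096 instead of 0, and that value lands on the transport's in-capsule data size. With -c 0 the target fetches all write data after the command arrives; with -c 4096 writes up to 4 KiB travel inside the command capsule itself, exercising a different receive path in the TCP transport. The only call that differs between the two passes, as traced above and again further below:

  # pass 1 (nvmf_filesystem_no_in_capsule)
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # pass 2 (nvmf_filesystem_in_capsule, this run)
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

Everything else, target startup, Malloc1, the ext4/btrfs/xfs smoke tests and teardown, repeats unchanged.]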
00:06:45.051 [2024-04-24 16:02:46.165683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.051 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.051 [2024-04-24 16:02:46.227479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.051 [2024-04-24 16:02:46.336549] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.051 [2024-04-24 16:02:46.336608] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.051 [2024-04-24 16:02:46.336624] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.051 [2024-04-24 16:02:46.336638] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.051 [2024-04-24 16:02:46.336650] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.051 [2024-04-24 16:02:46.336729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.051 [2024-04-24 16:02:46.336833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.051 [2024-04-24 16:02:46.336787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.051 [2024-04-24 16:02:46.336838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.310 16:02:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:45.310 16:02:46 -- common/autotest_common.sh@850 -- # return 0 00:06:45.310 16:02:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:45.310 16:02:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:45.310 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.310 16:02:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.310 16:02:46 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:45.310 16:02:46 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:45.310 16:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.310 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.310 [2024-04-24 16:02:46.485570] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.310 16:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.310 16:02:46 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:45.310 16:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.310 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.569 Malloc1 00:06:45.569 16:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.569 16:02:46 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:45.569 16:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.569 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.569 16:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.569 16:02:46 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:45.569 16:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.569 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.569 16:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.569 16:02:46 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.569 16:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.569 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.569 [2024-04-24 16:02:46.673111] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.569 16:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.569 16:02:46 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:45.569 16:02:46 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:45.569 16:02:46 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:45.569 16:02:46 -- common/autotest_common.sh@1366 -- # local bs 00:06:45.569 16:02:46 -- common/autotest_common.sh@1367 -- # local nb 00:06:45.569 16:02:46 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:45.569 16:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.569 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.569 16:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.569 16:02:46 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:45.569 { 00:06:45.569 "name": "Malloc1", 00:06:45.569 "aliases": [ 00:06:45.569 "95be8a7c-c880-4b4c-9141-e5ea841c0927" 00:06:45.569 ], 00:06:45.569 "product_name": "Malloc disk", 00:06:45.569 "block_size": 512, 00:06:45.569 "num_blocks": 1048576, 00:06:45.569 "uuid": "95be8a7c-c880-4b4c-9141-e5ea841c0927", 00:06:45.569 "assigned_rate_limits": { 00:06:45.569 "rw_ios_per_sec": 0, 00:06:45.569 "rw_mbytes_per_sec": 0, 00:06:45.569 "r_mbytes_per_sec": 0, 00:06:45.569 "w_mbytes_per_sec": 0 00:06:45.569 }, 00:06:45.569 "claimed": true, 00:06:45.569 "claim_type": "exclusive_write", 00:06:45.569 "zoned": false, 00:06:45.569 "supported_io_types": { 00:06:45.569 "read": true, 00:06:45.569 "write": true, 00:06:45.569 "unmap": true, 00:06:45.569 "write_zeroes": true, 00:06:45.569 "flush": true, 00:06:45.569 "reset": true, 00:06:45.569 "compare": false, 00:06:45.569 "compare_and_write": false, 00:06:45.569 "abort": true, 00:06:45.569 "nvme_admin": false, 00:06:45.569 "nvme_io": false 00:06:45.569 }, 00:06:45.569 "memory_domains": [ 00:06:45.569 { 00:06:45.569 "dma_device_id": "system", 00:06:45.569 "dma_device_type": 1 00:06:45.569 }, 00:06:45.569 { 00:06:45.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.569 "dma_device_type": 2 00:06:45.569 } 00:06:45.569 ], 00:06:45.569 "driver_specific": {} 00:06:45.569 } 00:06:45.569 ]' 00:06:45.569 16:02:46 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:45.569 16:02:46 -- common/autotest_common.sh@1369 -- # bs=512 00:06:45.569 16:02:46 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:45.569 16:02:46 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:45.569 16:02:46 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:45.569 16:02:46 -- common/autotest_common.sh@1374 -- # echo 512 00:06:45.569 16:02:46 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:45.569 16:02:46 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:46.135 16:02:47 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:46.135 16:02:47 -- common/autotest_common.sh@1184 -- # local i=0 00:06:46.135 16:02:47 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:46.135 16:02:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:46.135 16:02:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:48.661 16:02:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:48.661 16:02:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:48.661 16:02:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:48.661 16:02:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:48.661 16:02:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:48.661 16:02:49 -- common/autotest_common.sh@1194 -- # return 0 00:06:48.661 16:02:49 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:48.661 16:02:49 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:48.661 16:02:49 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:48.661 16:02:49 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:48.661 16:02:49 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:48.661 16:02:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:48.661 16:02:49 -- setup/common.sh@80 -- # echo 536870912 00:06:48.661 16:02:49 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:48.661 16:02:49 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:48.661 16:02:49 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:48.661 16:02:49 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:48.661 16:02:49 -- target/filesystem.sh@69 -- # partprobe 00:06:49.594 16:02:50 -- target/filesystem.sh@70 -- # sleep 1 00:06:50.526 16:02:51 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:50.526 16:02:51 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:50.527 16:02:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:50.527 16:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.527 16:02:51 -- common/autotest_common.sh@10 -- # set +x 00:06:50.527 ************************************ 00:06:50.527 START TEST filesystem_in_capsule_ext4 00:06:50.527 ************************************ 00:06:50.527 16:02:51 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:50.527 16:02:51 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:50.527 16:02:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:50.527 16:02:51 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:50.527 16:02:51 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:50.527 16:02:51 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:50.527 16:02:51 -- common/autotest_common.sh@914 -- # local i=0 00:06:50.527 16:02:51 -- common/autotest_common.sh@915 -- # local force 00:06:50.527 16:02:51 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:50.527 16:02:51 -- common/autotest_common.sh@918 -- # force=-F 00:06:50.527 16:02:51 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:50.527 mke2fs 1.46.5 (30-Dec-2021) 00:06:50.527 Discarding device blocks: 0/522240 done 00:06:50.527 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:50.527 Filesystem UUID: 0869477c-d950-400c-92d0-3d003985b07d 00:06:50.527 Superblock backups stored on blocks: 00:06:50.527 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:50.527 00:06:50.527 
Allocating group tables: 0/64 done 00:06:50.527 Writing inode tables: 0/64 done 00:06:50.784 Creating journal (8192 blocks): done 00:06:51.299 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:06:51.299 00:06:51.299 16:02:52 -- common/autotest_common.sh@931 -- # return 0 00:06:51.299 16:02:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:52.239 16:02:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:52.239 16:02:53 -- target/filesystem.sh@25 -- # sync 00:06:52.239 16:02:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:52.239 16:02:53 -- target/filesystem.sh@27 -- # sync 00:06:52.239 16:02:53 -- target/filesystem.sh@29 -- # i=0 00:06:52.239 16:02:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:52.239 16:02:53 -- target/filesystem.sh@37 -- # kill -0 3305251 00:06:52.239 16:02:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:52.239 16:02:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:52.239 16:02:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:52.239 16:02:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:52.239 00:06:52.239 real 0m1.630s 00:06:52.239 user 0m0.007s 00:06:52.239 sys 0m0.036s 00:06:52.239 16:02:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.239 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:06:52.239 ************************************ 00:06:52.239 END TEST filesystem_in_capsule_ext4 00:06:52.239 ************************************ 00:06:52.239 16:02:53 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:52.239 16:02:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:52.239 16:02:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.239 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:06:52.239 ************************************ 00:06:52.239 START TEST filesystem_in_capsule_btrfs 00:06:52.239 ************************************ 00:06:52.239 16:02:53 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:52.239 16:02:53 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:52.239 16:02:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:52.239 16:02:53 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:52.239 16:02:53 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:52.239 16:02:53 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:52.239 16:02:53 -- common/autotest_common.sh@914 -- # local i=0 00:06:52.239 16:02:53 -- common/autotest_common.sh@915 -- # local force 00:06:52.239 16:02:53 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:52.239 16:02:53 -- common/autotest_common.sh@920 -- # force=-f 00:06:52.239 16:02:53 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:52.497 btrfs-progs v6.6.2 00:06:52.497 See https://btrfs.readthedocs.io for more information. 00:06:52.497 00:06:52.497 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:52.497 NOTE: several default settings have changed in version 5.15, please make sure 00:06:52.497 this does not affect your deployments: 00:06:52.497 - DUP for metadata (-m dup) 00:06:52.497 - enabled no-holes (-O no-holes) 00:06:52.497 - enabled free-space-tree (-R free-space-tree) 00:06:52.497 00:06:52.497 Label: (null) 00:06:52.497 UUID: 563fd3cb-6ecf-4212-a1f2-fb81872da57b 00:06:52.497 Node size: 16384 00:06:52.497 Sector size: 4096 00:06:52.497 Filesystem size: 510.00MiB 00:06:52.497 Block group profiles: 00:06:52.497 Data: single 8.00MiB 00:06:52.497 Metadata: DUP 32.00MiB 00:06:52.497 System: DUP 8.00MiB 00:06:52.497 SSD detected: yes 00:06:52.497 Zoned device: no 00:06:52.497 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:52.497 Runtime features: free-space-tree 00:06:52.497 Checksum: crc32c 00:06:52.497 Number of devices: 1 00:06:52.497 Devices: 00:06:52.497 ID SIZE PATH 00:06:52.497 1 510.00MiB /dev/nvme0n1p1 00:06:52.497 00:06:52.497 16:02:53 -- common/autotest_common.sh@931 -- # return 0 00:06:52.498 16:02:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:53.431 16:02:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:53.431 16:02:54 -- target/filesystem.sh@25 -- # sync 00:06:53.431 16:02:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:53.431 16:02:54 -- target/filesystem.sh@27 -- # sync 00:06:53.431 16:02:54 -- target/filesystem.sh@29 -- # i=0 00:06:53.431 16:02:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:53.690 16:02:54 -- target/filesystem.sh@37 -- # kill -0 3305251 00:06:53.690 16:02:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:53.690 16:02:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:53.690 16:02:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:53.690 16:02:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:53.690 00:06:53.690 real 0m1.320s 00:06:53.690 user 0m0.013s 00:06:53.690 sys 0m0.050s 00:06:53.690 16:02:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.690 16:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.690 ************************************ 00:06:53.690 END TEST filesystem_in_capsule_btrfs 00:06:53.690 ************************************ 00:06:53.690 16:02:54 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:53.690 16:02:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:53.690 16:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.690 16:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.690 ************************************ 00:06:53.690 START TEST filesystem_in_capsule_xfs 00:06:53.690 ************************************ 00:06:53.690 16:02:54 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:53.690 16:02:54 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:53.690 16:02:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:53.690 16:02:54 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:53.690 16:02:54 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:53.690 16:02:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:53.690 16:02:54 -- common/autotest_common.sh@914 -- # local i=0 00:06:53.690 16:02:54 -- common/autotest_common.sh@915 -- # local force 00:06:53.690 16:02:54 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:53.690 16:02:54 -- common/autotest_common.sh@920 -- # force=-f 
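[Editor's note: the make_filesystem trace above is the small dispatcher every subtest goes through; its only real decision is the force flag, since mkfs.ext4 wants uppercase -F to clobber an existing signature while mkfs.btrfs and mkfs.xfs spell the same thing -f. A condensed sketch of the helper (the real one in autotest_common.sh also keeps the i/force locals for a retry path not exercised in this run):

  make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [[ $fstype == ext4 ]]; then
      force=-F        # ext4's mkfs forces with uppercase -F
    else
      force=-f        # btrfs and xfs force with lowercase -f
    fi
    mkfs."$fstype" "$force" "$dev_name"
  }

  make_filesystem xfs /dev/nvme0n1p1]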
00:06:53.690 16:02:54 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:53.690 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:53.690 = sectsz=512 attr=2, projid32bit=1 00:06:53.690 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:53.690 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:53.690 data = bsize=4096 blocks=130560, imaxpct=25 00:06:53.690 = sunit=0 swidth=0 blks 00:06:53.690 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:53.690 log =internal log bsize=4096 blocks=16384, version=2 00:06:53.690 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:53.690 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:54.623 Discarding blocks...Done. 00:06:54.623 16:02:55 -- common/autotest_common.sh@931 -- # return 0 00:06:54.623 16:02:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:56.622 16:02:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:56.622 16:02:57 -- target/filesystem.sh@25 -- # sync 00:06:56.622 16:02:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:56.622 16:02:57 -- target/filesystem.sh@27 -- # sync 00:06:56.622 16:02:57 -- target/filesystem.sh@29 -- # i=0 00:06:56.622 16:02:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:56.622 16:02:57 -- target/filesystem.sh@37 -- # kill -0 3305251 00:06:56.622 16:02:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:56.622 16:02:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:56.622 16:02:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:56.622 16:02:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:56.622 00:06:56.622 real 0m2.696s 00:06:56.622 user 0m0.012s 00:06:56.622 sys 0m0.040s 00:06:56.622 16:02:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:56.622 16:02:57 -- common/autotest_common.sh@10 -- # set +x 00:06:56.622 ************************************ 00:06:56.622 END TEST filesystem_in_capsule_xfs 00:06:56.622 ************************************ 00:06:56.622 16:02:57 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:56.622 16:02:57 -- target/filesystem.sh@93 -- # sync 00:06:56.622 16:02:57 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:56.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.622 16:02:57 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:56.622 16:02:57 -- common/autotest_common.sh@1205 -- # local i=0 00:06:56.622 16:02:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:56.622 16:02:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.622 16:02:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:56.622 16:02:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.622 16:02:57 -- common/autotest_common.sh@1217 -- # return 0 00:06:56.622 16:02:57 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:56.622 16:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.622 16:02:57 -- common/autotest_common.sh@10 -- # set +x 00:06:56.622 16:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.622 16:02:57 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:56.622 16:02:57 -- target/filesystem.sh@101 -- # killprocess 3305251 00:06:56.622 16:02:57 -- common/autotest_common.sh@936 -- # '[' -z 3305251 ']' 00:06:56.622 16:02:57 -- common/autotest_common.sh@940 -- # kill -0 3305251 
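[Editor's note: teardown above runs in the reverse order of setup: remove the test partition under flock, sync, `nvme disconnect -n nqn.2016-06.io.spdk:cnode1` on the initiator, wait for the serial to vanish from lsblk, delete the subsystem over RPC, and only then kill the target. A condensed sketch of the killprocess path this log takes (the special handling for sudo-wrapped processes is elided):

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 0      # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
      # peek at the command name so a bare sudo wrapper is never killed blindly
      # (the real helper handles that branch; not hit here, comm is reactor_0)
      [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reaping works because the target was launched by this shell
  }

The `wait` matters: it guarantees the listener port and hugepages are released before the next test's nvmfappstart.]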
00:06:56.622 16:02:57 -- common/autotest_common.sh@941 -- # uname 00:06:56.622 16:02:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:56.622 16:02:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3305251 00:06:56.622 16:02:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:56.622 16:02:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:56.622 16:02:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3305251' 00:06:56.622 killing process with pid 3305251 00:06:56.622 16:02:57 -- common/autotest_common.sh@955 -- # kill 3305251 00:06:56.622 16:02:57 -- common/autotest_common.sh@960 -- # wait 3305251 00:06:57.205 16:02:58 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:57.205 00:06:57.205 real 0m12.210s 00:06:57.205 user 0m46.730s 00:06:57.205 sys 0m1.863s 00:06:57.205 16:02:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:57.205 16:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:57.205 ************************************ 00:06:57.205 END TEST nvmf_filesystem_in_capsule 00:06:57.205 ************************************ 00:06:57.205 16:02:58 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:57.205 16:02:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:57.205 16:02:58 -- nvmf/common.sh@117 -- # sync 00:06:57.205 16:02:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:57.205 16:02:58 -- nvmf/common.sh@120 -- # set +e 00:06:57.205 16:02:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:57.205 16:02:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:57.205 rmmod nvme_tcp 00:06:57.205 rmmod nvme_fabrics 00:06:57.205 rmmod nvme_keyring 00:06:57.205 16:02:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:57.205 16:02:58 -- nvmf/common.sh@124 -- # set -e 00:06:57.205 16:02:58 -- nvmf/common.sh@125 -- # return 0 00:06:57.205 16:02:58 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:06:57.205 16:02:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:57.205 16:02:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:57.205 16:02:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:57.205 16:02:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:57.205 16:02:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:57.205 16:02:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.205 16:02:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.205 16:02:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.739 16:03:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:59.739 00:06:59.739 real 0m27.110s 00:06:59.739 user 1m27.446s 00:06:59.739 sys 0m5.146s 00:06:59.739 16:03:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:59.739 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:59.739 ************************************ 00:06:59.740 END TEST nvmf_filesystem 00:06:59.740 ************************************ 00:06:59.740 16:03:00 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:59.740 16:03:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:59.740 16:03:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.740 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:59.740 ************************************ 00:06:59.740 START TEST nvmf_discovery 00:06:59.740 ************************************ 00:06:59.740 
16:03:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:59.740 * Looking for test storage... 00:06:59.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.740 16:03:00 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.740 16:03:00 -- nvmf/common.sh@7 -- # uname -s 00:06:59.740 16:03:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.740 16:03:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.740 16:03:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.740 16:03:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.740 16:03:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.740 16:03:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.740 16:03:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.740 16:03:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.740 16:03:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.740 16:03:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.740 16:03:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:59.740 16:03:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:59.740 16:03:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.740 16:03:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.740 16:03:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.740 16:03:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.740 16:03:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.740 16:03:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.740 16:03:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.740 16:03:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.740 16:03:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.740 16:03:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.740 16:03:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.740 16:03:00 -- paths/export.sh@5 -- # export PATH 00:06:59.740 16:03:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.740 16:03:00 -- nvmf/common.sh@47 -- # : 0 00:06:59.740 16:03:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.740 16:03:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.740 16:03:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.740 16:03:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.740 16:03:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.740 16:03:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.740 16:03:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.740 16:03:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.740 16:03:00 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:59.740 16:03:00 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:59.740 16:03:00 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:59.740 16:03:00 -- target/discovery.sh@15 -- # hash nvme 00:06:59.740 16:03:00 -- target/discovery.sh@20 -- # nvmftestinit 00:06:59.740 16:03:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:59.740 16:03:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.740 16:03:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:59.740 16:03:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:59.740 16:03:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:59.740 16:03:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.740 16:03:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.740 16:03:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.740 16:03:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:59.740 16:03:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:59.740 16:03:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:59.740 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:07:01.641 16:03:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:01.641 16:03:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:01.641 16:03:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:01.641 16:03:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:01.641 16:03:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:01.641 16:03:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:01.641 16:03:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:01.641 16:03:02 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:01.641 16:03:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:01.641 16:03:02 -- nvmf/common.sh@296 -- # e810=() 00:07:01.641 16:03:02 -- nvmf/common.sh@296 -- # local -ga e810 00:07:01.641 16:03:02 -- nvmf/common.sh@297 -- # x722=() 00:07:01.641 16:03:02 -- nvmf/common.sh@297 -- # local -ga x722 00:07:01.641 16:03:02 -- nvmf/common.sh@298 -- # mlx=() 00:07:01.641 16:03:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:01.641 16:03:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.641 16:03:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:01.642 16:03:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:01.642 16:03:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:01.642 16:03:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.642 16:03:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:01.642 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:01.642 16:03:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.642 16:03:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:01.642 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:01.642 16:03:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:01.642 16:03:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.642 16:03:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.642 16:03:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:01.642 16:03:02 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.642 16:03:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:01.642 Found net devices under 0000:09:00.0: cvl_0_0 00:07:01.642 16:03:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.642 16:03:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.642 16:03:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.642 16:03:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:01.642 16:03:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.642 16:03:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:01.642 Found net devices under 0000:09:00.1: cvl_0_1 00:07:01.642 16:03:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.642 16:03:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:01.642 16:03:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:01.642 16:03:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:01.642 16:03:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.642 16:03:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.642 16:03:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.642 16:03:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:01.642 16:03:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.642 16:03:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.642 16:03:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:01.642 16:03:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.642 16:03:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.642 16:03:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:01.642 16:03:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:01.642 16:03:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.642 16:03:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.642 16:03:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.642 16:03:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.642 16:03:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:01.642 16:03:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.642 16:03:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.642 16:03:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.642 16:03:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:01.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:07:01.642 00:07:01.642 --- 10.0.0.2 ping statistics --- 00:07:01.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.642 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:07:01.642 16:03:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:01.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:07:01.642 00:07:01.642 --- 10.0.0.1 ping statistics --- 00:07:01.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.642 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:01.642 16:03:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.642 16:03:02 -- nvmf/common.sh@411 -- # return 0 00:07:01.642 16:03:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:01.642 16:03:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.642 16:03:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:01.642 16:03:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.642 16:03:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:01.642 16:03:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:01.642 16:03:02 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:01.642 16:03:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:01.642 16:03:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:01.642 16:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:01.642 16:03:02 -- nvmf/common.sh@470 -- # nvmfpid=3308881 00:07:01.642 16:03:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.642 16:03:02 -- nvmf/common.sh@471 -- # waitforlisten 3308881 00:07:01.642 16:03:02 -- common/autotest_common.sh@817 -- # '[' -z 3308881 ']' 00:07:01.642 16:03:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.642 16:03:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:01.642 16:03:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.642 16:03:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:01.642 16:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:01.642 [2024-04-24 16:03:02.829148] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:07:01.642 [2024-04-24 16:03:02.829229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.642 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.642 [2024-04-24 16:03:02.891101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.900 [2024-04-24 16:03:02.999529] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.900 [2024-04-24 16:03:02.999582] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.900 [2024-04-24 16:03:02.999598] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.900 [2024-04-24 16:03:02.999611] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.900 [2024-04-24 16:03:02.999624] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
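The two pings above are the final step of nvmf_tcp_init. Condensed from the trace (interface, namespace, and address names exactly as logged), the topology the harness builds isolates the target port cvl_0_0 in its own network namespace while the initiator side cvl_0_1 stays in the root namespace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace

Because nvmf_tgt is then launched with ip netns exec cvl_0_0_ns_spdk, every later rpc_cmd and nvme discover in this log crosses that 10.0.0.x link rather than loopback.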
00:07:01.900 [2024-04-24 16:03:02.999692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.900 [2024-04-24 16:03:02.999767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.900 [2024-04-24 16:03:02.999863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.900 [2024-04-24 16:03:02.999866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.900 16:03:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.900 16:03:03 -- common/autotest_common.sh@850 -- # return 0 00:07:01.900 16:03:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:01.900 16:03:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:01.900 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:01.900 16:03:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.900 16:03:03 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.900 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.900 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:01.900 [2024-04-24 16:03:03.147441] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.900 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.900 16:03:03 -- target/discovery.sh@26 -- # seq 1 4 00:07:01.900 16:03:03 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:01.900 16:03:03 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:01.900 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.900 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:01.900 Null1 00:07:01.900 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.900 16:03:03 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:01.900 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.900 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:01.900 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.900 16:03:03 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:01.900 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.900 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:01.900 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.900 16:03:03 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.900 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.900 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 [2024-04-24 16:03:03.187804] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:02.159 16:03:03 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 Null2 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:02.159 16:03:03 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:02.159 16:03:03 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 Null3 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:02.159 16:03:03 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 Null4 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:02.159 
16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:02.159 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.159 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.159 16:03:03 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:07:02.159 00:07:02.159 Discovery Log Number of Records 6, Generation counter 6 00:07:02.159 =====Discovery Log Entry 0====== 00:07:02.159 trtype: tcp 00:07:02.159 adrfam: ipv4 00:07:02.159 subtype: current discovery subsystem 00:07:02.159 treq: not required 00:07:02.159 portid: 0 00:07:02.159 trsvcid: 4420 00:07:02.159 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:02.159 traddr: 10.0.0.2 00:07:02.159 eflags: explicit discovery connections, duplicate discovery information 00:07:02.159 sectype: none 00:07:02.159 =====Discovery Log Entry 1====== 00:07:02.159 trtype: tcp 00:07:02.159 adrfam: ipv4 00:07:02.159 subtype: nvme subsystem 00:07:02.159 treq: not required 00:07:02.159 portid: 0 00:07:02.159 trsvcid: 4420 00:07:02.159 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:02.159 traddr: 10.0.0.2 00:07:02.159 eflags: none 00:07:02.159 sectype: none 00:07:02.159 =====Discovery Log Entry 2====== 00:07:02.159 trtype: tcp 00:07:02.159 adrfam: ipv4 00:07:02.159 subtype: nvme subsystem 00:07:02.159 treq: not required 00:07:02.159 portid: 0 00:07:02.159 trsvcid: 4420 00:07:02.159 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:02.160 traddr: 10.0.0.2 00:07:02.160 eflags: none 00:07:02.160 sectype: none 00:07:02.160 =====Discovery Log Entry 3====== 00:07:02.160 trtype: tcp 00:07:02.160 adrfam: ipv4 00:07:02.160 subtype: nvme subsystem 00:07:02.160 treq: not required 00:07:02.160 portid: 0 00:07:02.160 trsvcid: 4420 00:07:02.160 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:02.160 traddr: 10.0.0.2 00:07:02.160 eflags: none 00:07:02.160 sectype: none 00:07:02.160 =====Discovery Log Entry 4====== 00:07:02.160 trtype: tcp 00:07:02.160 adrfam: ipv4 00:07:02.160 subtype: nvme subsystem 00:07:02.160 treq: not required 00:07:02.160 portid: 0 00:07:02.160 trsvcid: 4420 00:07:02.160 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:02.160 traddr: 10.0.0.2 00:07:02.160 eflags: none 00:07:02.160 sectype: none 00:07:02.160 =====Discovery Log Entry 5====== 00:07:02.160 trtype: tcp 00:07:02.160 adrfam: ipv4 00:07:02.160 subtype: discovery subsystem referral 00:07:02.160 treq: not required 00:07:02.160 portid: 0 00:07:02.160 trsvcid: 4430 00:07:02.160 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:02.160 traddr: 10.0.0.2 00:07:02.160 eflags: none 00:07:02.160 sectype: none 00:07:02.160 16:03:03 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:02.160 Perform nvmf subsystem discovery via RPC 00:07:02.160 16:03:03 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:02.160 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.160 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.160 [2024-04-24 16:03:03.372138] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:02.160 [ 00:07:02.160 { 00:07:02.160 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:02.160 "subtype": "Discovery", 00:07:02.160 "listen_addresses": [ 00:07:02.160 { 00:07:02.160 "transport": "TCP", 00:07:02.160 "trtype": "TCP", 00:07:02.160 "adrfam": "IPv4", 00:07:02.160 "traddr": "10.0.0.2", 00:07:02.160 "trsvcid": "4420" 00:07:02.160 } 00:07:02.160 ], 00:07:02.160 "allow_any_host": true, 00:07:02.160 "hosts": [] 00:07:02.160 }, 00:07:02.160 { 00:07:02.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:02.160 "subtype": "NVMe", 00:07:02.160 "listen_addresses": [ 00:07:02.160 { 00:07:02.160 "transport": "TCP", 00:07:02.160 "trtype": "TCP", 00:07:02.160 "adrfam": "IPv4", 00:07:02.160 "traddr": "10.0.0.2", 00:07:02.160 "trsvcid": "4420" 00:07:02.160 } 00:07:02.160 ], 00:07:02.160 "allow_any_host": true, 00:07:02.160 "hosts": [], 00:07:02.160 "serial_number": "SPDK00000000000001", 00:07:02.160 "model_number": "SPDK bdev Controller", 00:07:02.160 "max_namespaces": 32, 00:07:02.160 "min_cntlid": 1, 00:07:02.160 "max_cntlid": 65519, 00:07:02.160 "namespaces": [ 00:07:02.160 { 00:07:02.160 "nsid": 1, 00:07:02.160 "bdev_name": "Null1", 00:07:02.160 "name": "Null1", 00:07:02.160 "nguid": "2B9A3BC97BD44A68B508D5D5495A6411", 00:07:02.160 "uuid": "2b9a3bc9-7bd4-4a68-b508-d5d5495a6411" 00:07:02.160 } 00:07:02.160 ] 00:07:02.160 }, 00:07:02.160 { 00:07:02.160 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:02.160 "subtype": "NVMe", 00:07:02.160 "listen_addresses": [ 00:07:02.160 { 00:07:02.160 "transport": "TCP", 00:07:02.160 "trtype": "TCP", 00:07:02.160 "adrfam": "IPv4", 00:07:02.160 "traddr": "10.0.0.2", 00:07:02.160 "trsvcid": "4420" 00:07:02.160 } 00:07:02.160 ], 00:07:02.160 "allow_any_host": true, 00:07:02.160 "hosts": [], 00:07:02.160 "serial_number": "SPDK00000000000002", 00:07:02.160 "model_number": "SPDK bdev Controller", 00:07:02.160 "max_namespaces": 32, 00:07:02.160 "min_cntlid": 1, 00:07:02.160 "max_cntlid": 65519, 00:07:02.160 "namespaces": [ 00:07:02.160 { 00:07:02.160 "nsid": 1, 00:07:02.160 "bdev_name": "Null2", 00:07:02.160 "name": "Null2", 00:07:02.160 "nguid": "D49705A368B54D1E8C4422F2CA74ECA0", 00:07:02.160 "uuid": "d49705a3-68b5-4d1e-8c44-22f2ca74eca0" 00:07:02.160 } 00:07:02.160 ] 00:07:02.160 }, 00:07:02.160 { 00:07:02.160 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:02.160 "subtype": "NVMe", 00:07:02.160 "listen_addresses": [ 00:07:02.160 { 00:07:02.160 "transport": "TCP", 00:07:02.160 "trtype": "TCP", 00:07:02.160 "adrfam": "IPv4", 00:07:02.160 "traddr": "10.0.0.2", 00:07:02.160 "trsvcid": "4420" 00:07:02.160 } 00:07:02.160 ], 00:07:02.160 "allow_any_host": true, 00:07:02.160 "hosts": [], 00:07:02.160 "serial_number": "SPDK00000000000003", 00:07:02.160 "model_number": "SPDK bdev Controller", 00:07:02.160 "max_namespaces": 32, 00:07:02.160 "min_cntlid": 1, 00:07:02.160 "max_cntlid": 65519, 00:07:02.160 "namespaces": [ 00:07:02.160 { 00:07:02.160 "nsid": 1, 00:07:02.160 "bdev_name": "Null3", 00:07:02.160 "name": "Null3", 00:07:02.160 "nguid": "C1FA9E7B2152407FAC105A989635387F", 00:07:02.160 "uuid": "c1fa9e7b-2152-407f-ac10-5a989635387f" 00:07:02.160 } 00:07:02.160 ] 
00:07:02.160 }, 00:07:02.160 { 00:07:02.160 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:02.160 "subtype": "NVMe", 00:07:02.160 "listen_addresses": [ 00:07:02.160 { 00:07:02.160 "transport": "TCP", 00:07:02.160 "trtype": "TCP", 00:07:02.160 "adrfam": "IPv4", 00:07:02.160 "traddr": "10.0.0.2", 00:07:02.160 "trsvcid": "4420" 00:07:02.160 } 00:07:02.160 ], 00:07:02.160 "allow_any_host": true, 00:07:02.160 "hosts": [], 00:07:02.160 "serial_number": "SPDK00000000000004", 00:07:02.160 "model_number": "SPDK bdev Controller", 00:07:02.160 "max_namespaces": 32, 00:07:02.160 "min_cntlid": 1, 00:07:02.160 "max_cntlid": 65519, 00:07:02.160 "namespaces": [ 00:07:02.160 { 00:07:02.160 "nsid": 1, 00:07:02.160 "bdev_name": "Null4", 00:07:02.160 "name": "Null4", 00:07:02.160 "nguid": "2FF2F8297BF44DFF92C649B7939CC8DC", 00:07:02.160 "uuid": "2ff2f829-7bf4-4dff-92c6-49b7939cc8dc" 00:07:02.160 } 00:07:02.160 ] 00:07:02.160 } 00:07:02.160 ] 00:07:02.160 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.160 16:03:03 -- target/discovery.sh@42 -- # seq 1 4 00:07:02.160 16:03:03 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.160 16:03:03 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.160 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.160 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.160 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.160 16:03:03 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:02.160 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.160 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.160 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.160 16:03:03 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.160 16:03:03 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:02.160 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.160 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.160 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.160 16:03:03 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:02.160 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.160 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.160 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.160 16:03:03 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.160 16:03:03 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:02.160 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.160 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.160 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.160 16:03:03 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:02.160 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.160 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.160 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.160 16:03:03 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.160 16:03:03 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:02.160 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.160 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.419 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
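Stripped of the xtrace plumbing, the discovery test body above (and the teardown that follows here) reduces to the RPC sequence below. This assumes rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, which matches the waitforlisten message earlier; the loop bounds and sizes come from seq 1 4, NULL_BDEV_SIZE=102400, and NULL_BLOCK_SIZE=512 in the trace:

  for i in $(seq 1 4); do
      scripts/rpc.py bdev_null_create Null$i 102400 512
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
          -a -s SPDK0000000000000$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t tcp -a 10.0.0.2 -s 4420
  done
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430   # NVMF_PORT_REFERRAL
  nvme discover -t tcp -a 10.0.0.2 -s 4420    # expects 6 records: 4 subsystems + discovery + referral

(The trace also passes --hostnqn/--hostid to nvme discover; omitted here for brevity.) The "Discovery Log Number of Records 6" output and the nvmf_get_subsystems JSON above are the two views being cross-checked before the subsystems and null bdevs are deleted again in order.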
00:07:02.419 16:03:03 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:02.419 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.419 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.419 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.419 16:03:03 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:02.419 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.419 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.419 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.419 16:03:03 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:02.419 16:03:03 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:02.419 16:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.419 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.419 16:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.419 16:03:03 -- target/discovery.sh@49 -- # check_bdevs= 00:07:02.419 16:03:03 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:02.419 16:03:03 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:02.419 16:03:03 -- target/discovery.sh@57 -- # nvmftestfini 00:07:02.419 16:03:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:02.419 16:03:03 -- nvmf/common.sh@117 -- # sync 00:07:02.419 16:03:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.419 16:03:03 -- nvmf/common.sh@120 -- # set +e 00:07:02.419 16:03:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.419 16:03:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.419 rmmod nvme_tcp 00:07:02.419 rmmod nvme_fabrics 00:07:02.419 rmmod nvme_keyring 00:07:02.419 16:03:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.419 16:03:03 -- nvmf/common.sh@124 -- # set -e 00:07:02.419 16:03:03 -- nvmf/common.sh@125 -- # return 0 00:07:02.419 16:03:03 -- nvmf/common.sh@478 -- # '[' -n 3308881 ']' 00:07:02.419 16:03:03 -- nvmf/common.sh@479 -- # killprocess 3308881 00:07:02.419 16:03:03 -- common/autotest_common.sh@936 -- # '[' -z 3308881 ']' 00:07:02.419 16:03:03 -- common/autotest_common.sh@940 -- # kill -0 3308881 00:07:02.419 16:03:03 -- common/autotest_common.sh@941 -- # uname 00:07:02.419 16:03:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.419 16:03:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3308881 00:07:02.419 16:03:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.419 16:03:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.419 16:03:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3308881' 00:07:02.419 killing process with pid 3308881 00:07:02.419 16:03:03 -- common/autotest_common.sh@955 -- # kill 3308881 00:07:02.419 [2024-04-24 16:03:03.583132] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:02.419 16:03:03 -- common/autotest_common.sh@960 -- # wait 3308881 00:07:02.678 16:03:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:02.678 16:03:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:02.678 16:03:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:02.678 16:03:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.678 16:03:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:02.678 16:03:03 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.678 16:03:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.678 16:03:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.220 16:03:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.220 00:07:05.220 real 0m5.349s 00:07:05.220 user 0m4.102s 00:07:05.220 sys 0m1.820s 00:07:05.220 16:03:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.220 16:03:05 -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 ************************************ 00:07:05.220 END TEST nvmf_discovery 00:07:05.220 ************************************ 00:07:05.220 16:03:05 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:05.221 16:03:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.221 16:03:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.221 16:03:05 -- common/autotest_common.sh@10 -- # set +x 00:07:05.221 ************************************ 00:07:05.221 START TEST nvmf_referrals 00:07:05.221 ************************************ 00:07:05.221 16:03:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:05.221 * Looking for test storage... 00:07:05.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.221 16:03:06 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.221 16:03:06 -- nvmf/common.sh@7 -- # uname -s 00:07:05.221 16:03:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.221 16:03:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.221 16:03:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.221 16:03:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.221 16:03:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.221 16:03:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.221 16:03:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.221 16:03:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.221 16:03:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.221 16:03:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.221 16:03:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:05.221 16:03:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:05.221 16:03:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.221 16:03:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.221 16:03:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.221 16:03:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.221 16:03:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.221 16:03:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.221 16:03:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.221 16:03:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.221 16:03:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.221 16:03:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.221 16:03:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.221 16:03:06 -- paths/export.sh@5 -- # export PATH 00:07:05.221 16:03:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.221 16:03:06 -- nvmf/common.sh@47 -- # : 0 00:07:05.221 16:03:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.221 16:03:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.221 16:03:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.221 16:03:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.221 16:03:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.221 16:03:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.221 16:03:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.221 16:03:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.221 16:03:06 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:05.221 16:03:06 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:05.221 16:03:06 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:05.221 16:03:06 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:05.221 16:03:06 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:05.221 16:03:06 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:05.221 16:03:06 -- target/referrals.sh@37 -- # nvmftestinit 00:07:05.221 16:03:06 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:05.221 16:03:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.221 16:03:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:05.221 16:03:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:05.221 16:03:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:05.221 16:03:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.221 16:03:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.221 16:03:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.221 16:03:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:05.221 16:03:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:05.221 16:03:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.221 16:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:07.119 16:03:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:07.119 16:03:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.119 16:03:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.119 16:03:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.119 16:03:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.119 16:03:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.119 16:03:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.119 16:03:08 -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.119 16:03:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.119 16:03:08 -- nvmf/common.sh@296 -- # e810=() 00:07:07.119 16:03:08 -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.119 16:03:08 -- nvmf/common.sh@297 -- # x722=() 00:07:07.119 16:03:08 -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.119 16:03:08 -- nvmf/common.sh@298 -- # mlx=() 00:07:07.119 16:03:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.119 16:03:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.119 16:03:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.119 16:03:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.119 16:03:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.119 16:03:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.119 16:03:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:07.119 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:07.119 16:03:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.119 16:03:08 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.119 16:03:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:07.119 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:07.119 16:03:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.119 16:03:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.119 16:03:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.119 16:03:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:07.119 16:03:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.119 16:03:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:07.119 Found net devices under 0000:09:00.0: cvl_0_0 00:07:07.119 16:03:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.119 16:03:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.119 16:03:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.119 16:03:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:07.119 16:03:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.119 16:03:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:07.119 Found net devices under 0000:09:00.1: cvl_0_1 00:07:07.119 16:03:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.119 16:03:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:07.119 16:03:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:07.119 16:03:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:07.119 16:03:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.119 16:03:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.119 16:03:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.119 16:03:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.119 16:03:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.119 16:03:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.119 16:03:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.119 16:03:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.119 16:03:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.119 16:03:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.119 16:03:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.119 16:03:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.119 16:03:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:07:07.119 16:03:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.119 16:03:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.119 16:03:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.119 16:03:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.119 16:03:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.119 16:03:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.119 16:03:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:07:07.119 00:07:07.119 --- 10.0.0.2 ping statistics --- 00:07:07.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.119 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:07:07.119 16:03:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:07:07.119 00:07:07.119 --- 10.0.0.1 ping statistics --- 00:07:07.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.119 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:07.119 16:03:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.119 16:03:08 -- nvmf/common.sh@411 -- # return 0 00:07:07.119 16:03:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:07.119 16:03:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.119 16:03:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:07.119 16:03:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:07.120 16:03:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.120 16:03:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:07.120 16:03:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:07.120 16:03:08 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:07.120 16:03:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:07.120 16:03:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:07.120 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.120 16:03:08 -- nvmf/common.sh@470 -- # nvmfpid=3311340 00:07:07.120 16:03:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.120 16:03:08 -- nvmf/common.sh@471 -- # waitforlisten 3311340 00:07:07.120 16:03:08 -- common/autotest_common.sh@817 -- # '[' -z 3311340 ']' 00:07:07.120 16:03:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.120 16:03:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:07.120 16:03:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.120 16:03:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:07.120 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.120 [2024-04-24 16:03:08.248421] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
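Once this second target instance is up inside the namespace, target/referrals.sh registers three referrals and checks that the RPC view and the on-wire discovery log agree. Condensed from the xtrace that follows (the RPC names and jq filters are verbatim from the trace; folding them into a single comparison like this paraphrases the get_referral_ips helper, whose internals may differ):

  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  rpc_view=$(scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
  wire_view=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
  [[ "$rpc_view" == "$wire_view" ]]   # both must list 127.0.0.2 127.0.0.3 127.0.0.4

Later in the run the referrals are removed and re-added with -n discovery and -n nqn.2016-06.io.spdk:cnode1 to cover subsystem-qualified referrals, each time re-verifying via nvme discover.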
00:07:07.120 [2024-04-24 16:03:08.248501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.120 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.120 [2024-04-24 16:03:08.319627] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.377 [2024-04-24 16:03:08.436605] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.377 [2024-04-24 16:03:08.436661] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.377 [2024-04-24 16:03:08.436675] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.377 [2024-04-24 16:03:08.436687] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.377 [2024-04-24 16:03:08.436697] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.377 [2024-04-24 16:03:08.439767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.377 [2024-04-24 16:03:08.439807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.377 [2024-04-24 16:03:08.439886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.377 [2024-04-24 16:03:08.439890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.377 16:03:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:07.377 16:03:08 -- common/autotest_common.sh@850 -- # return 0 00:07:07.377 16:03:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:07.377 16:03:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:07.377 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.377 16:03:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.377 16:03:08 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:07.377 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.377 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.377 [2024-04-24 16:03:08.589457] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.377 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.377 16:03:08 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:07.377 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.377 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.377 [2024-04-24 16:03:08.601666] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:07.377 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.377 16:03:08 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:07.377 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.377 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.377 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.377 16:03:08 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:07.377 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.377 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.377 16:03:08 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:07:07.377 16:03:08 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:07.377 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.377 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.377 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.377 16:03:08 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:07.377 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.377 16:03:08 -- target/referrals.sh@48 -- # jq length 00:07:07.377 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.377 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.635 16:03:08 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:07.635 16:03:08 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:07.635 16:03:08 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:07.635 16:03:08 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:07.635 16:03:08 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:07.635 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.635 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 16:03:08 -- target/referrals.sh@21 -- # sort 00:07:07.635 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.635 16:03:08 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:07.635 16:03:08 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:07.635 16:03:08 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:07.635 16:03:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:07.635 16:03:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:07.635 16:03:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.635 16:03:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:07.635 16:03:08 -- target/referrals.sh@26 -- # sort 00:07:07.635 16:03:08 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:07.635 16:03:08 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:07.635 16:03:08 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:07.635 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.635 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.635 16:03:08 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:07.635 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.635 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.635 16:03:08 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:07.635 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.635 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.892 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.892 16:03:08 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:07:07.892 16:03:08 -- target/referrals.sh@56 -- # jq length 00:07:07.892 16:03:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.892 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.892 16:03:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.892 16:03:08 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:07.892 16:03:08 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:07.892 16:03:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:07.892 16:03:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:07.892 16:03:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.893 16:03:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:07.893 16:03:08 -- target/referrals.sh@26 -- # sort 00:07:07.893 16:03:09 -- target/referrals.sh@26 -- # echo 00:07:07.893 16:03:09 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:07.893 16:03:09 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:07.893 16:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.893 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:07.893 16:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.893 16:03:09 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:07.893 16:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.893 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:07.893 16:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.893 16:03:09 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:07.893 16:03:09 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:07.893 16:03:09 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:07.893 16:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.893 16:03:09 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:07.893 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:07.893 16:03:09 -- target/referrals.sh@21 -- # sort 00:07:07.893 16:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.893 16:03:09 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:07.893 16:03:09 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:07.893 16:03:09 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:07.893 16:03:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:07.893 16:03:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:07.893 16:03:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.893 16:03:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:07.893 16:03:09 -- target/referrals.sh@26 -- # sort 00:07:08.150 16:03:09 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:08.150 16:03:09 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:08.150 16:03:09 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:07:08.150 16:03:09 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:08.150 16:03:09 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:08.150 16:03:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.150 16:03:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:08.150 16:03:09 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:08.150 16:03:09 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:08.150 16:03:09 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:08.150 16:03:09 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:08.150 16:03:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.150 16:03:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:08.408 16:03:09 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:08.408 16:03:09 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:08.408 16:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.408 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:08.408 16:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.408 16:03:09 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:08.408 16:03:09 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:08.408 16:03:09 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.408 16:03:09 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:08.408 16:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.408 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:08.408 16:03:09 -- target/referrals.sh@21 -- # sort 00:07:08.408 16:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.408 16:03:09 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:08.408 16:03:09 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:08.408 16:03:09 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:08.408 16:03:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:08.408 16:03:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.408 16:03:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.408 16:03:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.408 16:03:09 -- target/referrals.sh@26 -- # sort 00:07:08.408 16:03:09 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:08.408 16:03:09 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:08.408 16:03:09 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:08.408 16:03:09 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:08.408 16:03:09 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:07:08.408 16:03:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.408 16:03:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:08.665 16:03:09 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:08.665 16:03:09 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:08.665 16:03:09 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:08.665 16:03:09 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:08.665 16:03:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.665 16:03:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:08.665 16:03:09 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:08.665 16:03:09 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:08.665 16:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.665 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:08.665 16:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.665 16:03:09 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.665 16:03:09 -- target/referrals.sh@82 -- # jq length 00:07:08.665 16:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.665 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:08.665 16:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.665 16:03:09 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:08.665 16:03:09 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:08.665 16:03:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:08.665 16:03:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.665 16:03:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.665 16:03:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.665 16:03:09 -- target/referrals.sh@26 -- # sort 00:07:08.923 16:03:09 -- target/referrals.sh@26 -- # echo 00:07:08.923 16:03:09 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:08.923 16:03:09 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:08.923 16:03:09 -- target/referrals.sh@86 -- # nvmftestfini 00:07:08.923 16:03:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:08.923 16:03:09 -- nvmf/common.sh@117 -- # sync 00:07:08.923 16:03:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.923 16:03:09 -- nvmf/common.sh@120 -- # set +e 00:07:08.923 16:03:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.924 16:03:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.924 rmmod nvme_tcp 00:07:08.924 rmmod nvme_fabrics 00:07:08.924 rmmod nvme_keyring 00:07:08.924 16:03:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.924 16:03:10 -- nvmf/common.sh@124 -- # set -e 
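The referral suite that finishes here drives everything through a small set of target-side RPCs plus an initiator-side discovery. A minimal sketch of the same flow, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock socket, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (addresses and ports mirror this run):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length      # 3, as asserted above
    # initiator-side view of the same referrals, filtered the way the suite does:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort                                                   # 127.0.0.2 127.0.0.3 127.0.0.4
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

As the trace shows, the suite then re-adds 127.0.0.2 with -n discovery and -n nqn.2016-06.io.spdk:cnode1 to check that a referral can point at either a discovery subsystem or an NVM subsystem before removing both.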
00:07:08.924 16:03:10 -- nvmf/common.sh@125 -- # return 0 00:07:08.924 16:03:10 -- nvmf/common.sh@478 -- # '[' -n 3311340 ']' 00:07:08.924 16:03:10 -- nvmf/common.sh@479 -- # killprocess 3311340 00:07:08.924 16:03:10 -- common/autotest_common.sh@936 -- # '[' -z 3311340 ']' 00:07:08.924 16:03:10 -- common/autotest_common.sh@940 -- # kill -0 3311340 00:07:08.924 16:03:10 -- common/autotest_common.sh@941 -- # uname 00:07:08.924 16:03:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.924 16:03:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3311340 00:07:08.924 16:03:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:08.924 16:03:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:08.924 16:03:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3311340' 00:07:08.924 killing process with pid 3311340 00:07:08.924 16:03:10 -- common/autotest_common.sh@955 -- # kill 3311340 00:07:08.924 16:03:10 -- common/autotest_common.sh@960 -- # wait 3311340 00:07:09.188 16:03:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:09.188 16:03:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:09.188 16:03:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:09.188 16:03:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:09.188 16:03:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:09.188 16:03:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.188 16:03:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.188 16:03:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.093 16:03:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:11.093 00:07:11.093 real 0m6.318s 00:07:11.093 user 0m8.611s 00:07:11.093 sys 0m1.934s 00:07:11.093 16:03:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:11.093 16:03:12 -- common/autotest_common.sh@10 -- # set +x 00:07:11.093 ************************************ 00:07:11.093 END TEST nvmf_referrals 00:07:11.093 ************************************ 00:07:11.093 16:03:12 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:11.093 16:03:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:11.093 16:03:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.093 16:03:12 -- common/autotest_common.sh@10 -- # set +x 00:07:11.351 ************************************ 00:07:11.351 START TEST nvmf_connect_disconnect 00:07:11.351 ************************************ 00:07:11.351 16:03:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:11.351 * Looking for test storage... 
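The connect_disconnect suite that begins here builds one subsystem and then attaches and detaches an initiator num_iterations=5 times. A sketch of the core flow under the same names this run uses (Malloc0, cnode1, 10.0.0.2:4420); the target-side RPCs appear verbatim in the trace further down, while the nvme connect/disconnect flags are the usual nvme-cli ones and are an assumption, since the loop itself runs with xtrace suppressed (set +x):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                     # returns bdev name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in 1 2 3 4 5; do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # logs "... disconnected 1 controller(s)"
    done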
00:07:11.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.351 16:03:12 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.351 16:03:12 -- nvmf/common.sh@7 -- # uname -s 00:07:11.351 16:03:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.351 16:03:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.351 16:03:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.351 16:03:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.351 16:03:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.351 16:03:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.351 16:03:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.351 16:03:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.351 16:03:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.351 16:03:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.351 16:03:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:11.351 16:03:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:11.351 16:03:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.351 16:03:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.351 16:03:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.351 16:03:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.351 16:03:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.351 16:03:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.351 16:03:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.351 16:03:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.351 16:03:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.351 16:03:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.351 16:03:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.351 16:03:12 -- paths/export.sh@5 -- # export PATH 00:07:11.351 16:03:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.351 16:03:12 -- nvmf/common.sh@47 -- # : 0 00:07:11.351 16:03:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.351 16:03:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.351 16:03:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.351 16:03:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.351 16:03:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.351 16:03:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.351 16:03:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.351 16:03:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.351 16:03:12 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:11.351 16:03:12 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:11.351 16:03:12 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:11.351 16:03:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:11.351 16:03:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.351 16:03:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:11.351 16:03:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:11.351 16:03:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:11.351 16:03:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.351 16:03:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.351 16:03:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.351 16:03:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:11.351 16:03:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:11.351 16:03:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.351 16:03:12 -- common/autotest_common.sh@10 -- # set +x 00:07:13.250 16:03:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:13.250 16:03:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.250 16:03:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:13.250 16:03:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.250 16:03:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.250 16:03:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.250 16:03:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.250 16:03:14 -- nvmf/common.sh@295 -- # net_devs=() 00:07:13.250 16:03:14 -- nvmf/common.sh@295 -- # local -ga net_devs 
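The gather_supported_nvmf_pci_devs walk that follows sorts NICs into e810/x722/mlx buckets by PCI vendor:device ID. This run matches the Intel 0x8086:0x159b pair at 0000:09:00.0 and 0000:09:00.1, an E810-family device driven by ice, which is what SPDK_TEST_NVMF_NICS=e810 asked for. A quick way to reproduce the match by hand (plain pciutils, not part of this log):

    lspci -d 8086:159b      # should list the same two ports found below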
00:07:13.509 16:03:14 -- nvmf/common.sh@296 -- # e810=() 00:07:13.509 16:03:14 -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.509 16:03:14 -- nvmf/common.sh@297 -- # x722=() 00:07:13.509 16:03:14 -- nvmf/common.sh@297 -- # local -ga x722 00:07:13.509 16:03:14 -- nvmf/common.sh@298 -- # mlx=() 00:07:13.509 16:03:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.509 16:03:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.509 16:03:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.509 16:03:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.509 16:03:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.509 16:03:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.509 16:03:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:13.509 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:13.509 16:03:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.509 16:03:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:13.509 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:13.509 16:03:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.509 16:03:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.509 16:03:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.509 16:03:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:13.509 16:03:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.509 16:03:14 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:09:00.0: cvl_0_0' 00:07:13.509 Found net devices under 0000:09:00.0: cvl_0_0 00:07:13.509 16:03:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.509 16:03:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.509 16:03:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.509 16:03:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:13.509 16:03:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.509 16:03:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:13.509 Found net devices under 0000:09:00.1: cvl_0_1 00:07:13.509 16:03:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.509 16:03:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:13.509 16:03:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:13.509 16:03:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:13.509 16:03:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:13.509 16:03:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.509 16:03:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.509 16:03:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.509 16:03:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.509 16:03:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.509 16:03:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.509 16:03:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.509 16:03:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.509 16:03:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.509 16:03:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.509 16:03:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.509 16:03:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.509 16:03:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.509 16:03:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.510 16:03:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.510 16:03:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.510 16:03:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.510 16:03:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.510 16:03:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.510 16:03:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:13.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:07:13.510 00:07:13.510 --- 10.0.0.2 ping statistics --- 00:07:13.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.510 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:13.510 16:03:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:13.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:07:13.510 00:07:13.510 --- 10.0.0.1 ping statistics --- 00:07:13.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.510 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:07:13.510 16:03:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.510 16:03:14 -- nvmf/common.sh@411 -- # return 0 00:07:13.510 16:03:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:13.510 16:03:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.510 16:03:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:13.510 16:03:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:13.510 16:03:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.510 16:03:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:13.510 16:03:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:13.510 16:03:14 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:13.510 16:03:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:13.510 16:03:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:13.510 16:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:13.510 16:03:14 -- nvmf/common.sh@470 -- # nvmfpid=3313774 00:07:13.510 16:03:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.510 16:03:14 -- nvmf/common.sh@471 -- # waitforlisten 3313774 00:07:13.510 16:03:14 -- common/autotest_common.sh@817 -- # '[' -z 3313774 ']' 00:07:13.510 16:03:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.510 16:03:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.510 16:03:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.510 16:03:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.510 16:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:13.510 [2024-04-24 16:03:14.744046] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:07:13.510 [2024-04-24 16:03:14.744129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.510 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.768 [2024-04-24 16:03:14.814622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.768 [2024-04-24 16:03:14.935697] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.768 [2024-04-24 16:03:14.935771] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.768 [2024-04-24 16:03:14.935797] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.768 [2024-04-24 16:03:14.935811] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.768 [2024-04-24 16:03:14.935823] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
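The pings just above close out the network bring-up: the harness splits target and initiator across a single network namespace on one host. Stripped of the common.sh plumbing, the setup this run performed amounts to the following (cvl_0_0/cvl_0_1 are the renamed E810 ports from this log; port 4420 is opened here even though the discovery listener uses 8009):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator

nvmf_tgt itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced below), which is why the app's listeners bind 10.0.0.2.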
00:07:13.768 [2024-04-24 16:03:14.935885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.768 [2024-04-24 16:03:14.935941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.768 [2024-04-24 16:03:14.937768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.768 [2024-04-24 16:03:14.937775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.700 16:03:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.700 16:03:15 -- common/autotest_common.sh@850 -- # return 0 00:07:14.700 16:03:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:14.700 16:03:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:14.700 16:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.700 16:03:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.700 16:03:15 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:14.700 16:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.700 16:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.700 [2024-04-24 16:03:15.752596] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.700 16:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.700 16:03:15 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:14.700 16:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.700 16:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.700 16:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.700 16:03:15 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:14.700 16:03:15 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.700 16:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.700 16:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.700 16:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.700 16:03:15 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:14.700 16:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.700 16:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.700 16:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.700 16:03:15 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.700 16:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.700 16:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.700 [2024-04-24 16:03:15.805881] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.700 16:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.700 16:03:15 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:14.700 16:03:15 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:14.700 16:03:15 -- target/connect_disconnect.sh@34 -- # set +x 00:07:17.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.078 16:03:29 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:28.078 16:03:29 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:28.078 16:03:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:28.078 16:03:29 -- nvmf/common.sh@117 -- # sync 00:07:28.078 16:03:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.078 16:03:29 -- nvmf/common.sh@120 -- # set +e 00:07:28.078 16:03:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.078 16:03:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.078 rmmod nvme_tcp 00:07:28.078 rmmod nvme_fabrics 00:07:28.078 rmmod nvme_keyring 00:07:28.078 16:03:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.078 16:03:29 -- nvmf/common.sh@124 -- # set -e 00:07:28.078 16:03:29 -- nvmf/common.sh@125 -- # return 0 00:07:28.078 16:03:29 -- nvmf/common.sh@478 -- # '[' -n 3313774 ']' 00:07:28.078 16:03:29 -- nvmf/common.sh@479 -- # killprocess 3313774 00:07:28.078 16:03:29 -- common/autotest_common.sh@936 -- # '[' -z 3313774 ']' 00:07:28.078 16:03:29 -- common/autotest_common.sh@940 -- # kill -0 3313774 00:07:28.078 16:03:29 -- common/autotest_common.sh@941 -- # uname 00:07:28.078 16:03:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:28.078 16:03:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3313774 00:07:28.078 16:03:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:28.078 16:03:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:28.078 16:03:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3313774' 00:07:28.078 killing process with pid 3313774 00:07:28.078 16:03:29 -- common/autotest_common.sh@955 -- # kill 3313774 00:07:28.078 16:03:29 -- common/autotest_common.sh@960 -- # wait 3313774 00:07:28.644 16:03:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:28.644 16:03:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:28.644 16:03:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:28.644 16:03:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.644 16:03:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.644 16:03:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.644 16:03:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.644 16:03:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.548 16:03:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:30.548 00:07:30.548 real 0m19.221s 00:07:30.548 user 0m58.467s 00:07:30.548 sys 0m3.114s 00:07:30.548 16:03:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.548 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:07:30.548 ************************************ 00:07:30.548 END TEST nvmf_connect_disconnect 00:07:30.548 ************************************ 00:07:30.548 16:03:31 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:30.548 16:03:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:30.548 16:03:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.548 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:07:30.548 ************************************ 00:07:30.548 START TEST nvmf_multitarget 00:07:30.548 ************************************ 00:07:30.548 16:03:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:07:30.806 * Looking for test storage... 00:07:30.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.806 16:03:31 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.806 16:03:31 -- nvmf/common.sh@7 -- # uname -s 00:07:30.806 16:03:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.806 16:03:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.806 16:03:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.806 16:03:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.806 16:03:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.806 16:03:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.806 16:03:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.806 16:03:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.806 16:03:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.806 16:03:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.806 16:03:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:30.806 16:03:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:30.806 16:03:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.806 16:03:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.806 16:03:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.806 16:03:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.806 16:03:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.806 16:03:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.806 16:03:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.806 16:03:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.806 16:03:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.806 16:03:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.806 16:03:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.806 16:03:31 -- paths/export.sh@5 -- # export PATH 00:07:30.806 16:03:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.806 16:03:31 -- nvmf/common.sh@47 -- # : 0 00:07:30.806 16:03:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.806 16:03:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.806 16:03:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.806 16:03:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.806 16:03:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.806 16:03:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.806 16:03:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.806 16:03:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.806 16:03:31 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:30.806 16:03:31 -- target/multitarget.sh@15 -- # nvmftestinit 00:07:30.806 16:03:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:30.806 16:03:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.806 16:03:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:30.806 16:03:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:30.806 16:03:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:30.806 16:03:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.806 16:03:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.806 16:03:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.806 16:03:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:30.806 16:03:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:30.806 16:03:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.806 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.740 16:03:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:32.740 16:03:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.740 16:03:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.740 16:03:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.740 16:03:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.740 16:03:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.740 16:03:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.740 16:03:33 -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.740 16:03:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.740 16:03:33 -- 
nvmf/common.sh@296 -- # e810=() 00:07:32.740 16:03:33 -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.740 16:03:33 -- nvmf/common.sh@297 -- # x722=() 00:07:32.740 16:03:33 -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.740 16:03:33 -- nvmf/common.sh@298 -- # mlx=() 00:07:32.740 16:03:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.740 16:03:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.740 16:03:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.740 16:03:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.740 16:03:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.740 16:03:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.740 16:03:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:32.740 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:32.740 16:03:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.740 16:03:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:32.740 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:32.740 16:03:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.740 16:03:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.740 16:03:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.740 16:03:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:32.740 16:03:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.740 16:03:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:07:32.740 Found net devices under 0000:09:00.0: cvl_0_0 00:07:32.740 16:03:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.740 16:03:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.740 16:03:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.740 16:03:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:32.740 16:03:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.740 16:03:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:32.740 Found net devices under 0000:09:00.1: cvl_0_1 00:07:32.740 16:03:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.740 16:03:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:32.740 16:03:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:32.740 16:03:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:32.740 16:03:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:32.740 16:03:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.740 16:03:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.740 16:03:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.740 16:03:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.740 16:03:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.740 16:03:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.740 16:03:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.740 16:03:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.740 16:03:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.740 16:03:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.740 16:03:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.740 16:03:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.740 16:03:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.021 16:03:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.021 16:03:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.021 16:03:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:33.021 16:03:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.021 16:03:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.021 16:03:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.021 16:03:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:33.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:07:33.021 00:07:33.021 --- 10.0.0.2 ping statistics --- 00:07:33.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.021 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:07:33.021 16:03:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:07:33.021 00:07:33.021 --- 10.0.0.1 ping statistics --- 00:07:33.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.021 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:07:33.021 16:03:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.021 16:03:34 -- nvmf/common.sh@411 -- # return 0 00:07:33.021 16:03:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:33.021 16:03:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.021 16:03:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:33.021 16:03:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:33.021 16:03:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.021 16:03:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:33.021 16:03:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:33.021 16:03:34 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:33.021 16:03:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:33.021 16:03:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:33.021 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:33.021 16:03:34 -- nvmf/common.sh@470 -- # nvmfpid=3317552 00:07:33.021 16:03:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:33.021 16:03:34 -- nvmf/common.sh@471 -- # waitforlisten 3317552 00:07:33.021 16:03:34 -- common/autotest_common.sh@817 -- # '[' -z 3317552 ']' 00:07:33.021 16:03:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.021 16:03:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:33.021 16:03:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.021 16:03:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:33.021 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:33.021 [2024-04-24 16:03:34.187159] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:07:33.021 [2024-04-24 16:03:34.187245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.021 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.021 [2024-04-24 16:03:34.258489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.278 [2024-04-24 16:03:34.378067] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.278 [2024-04-24 16:03:34.378148] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.278 [2024-04-24 16:03:34.378164] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.278 [2024-04-24 16:03:34.378178] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.278 [2024-04-24 16:03:34.378189] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:33.278 [2024-04-24 16:03:34.378450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.278 [2024-04-24 16:03:34.378505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.279 [2024-04-24 16:03:34.378555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.279 [2024-04-24 16:03:34.378559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.210 16:03:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:34.210 16:03:35 -- common/autotest_common.sh@850 -- # return 0 00:07:34.210 16:03:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:34.210 16:03:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:34.210 16:03:35 -- common/autotest_common.sh@10 -- # set +x 00:07:34.210 16:03:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.210 16:03:35 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:34.210 16:03:35 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:34.210 16:03:35 -- target/multitarget.sh@21 -- # jq length 00:07:34.210 16:03:35 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:34.210 16:03:35 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:34.210 "nvmf_tgt_1" 00:07:34.210 16:03:35 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:34.468 "nvmf_tgt_2" 00:07:34.468 16:03:35 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:34.468 16:03:35 -- target/multitarget.sh@28 -- # jq length 00:07:34.468 16:03:35 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:34.468 16:03:35 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:34.468 true 00:07:34.468 16:03:35 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:34.726 true 00:07:34.726 16:03:35 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:34.726 16:03:35 -- target/multitarget.sh@35 -- # jq length 00:07:34.726 16:03:35 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:34.726 16:03:35 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:34.726 16:03:35 -- target/multitarget.sh@41 -- # nvmftestfini 00:07:34.726 16:03:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:34.726 16:03:35 -- nvmf/common.sh@117 -- # sync 00:07:34.726 16:03:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.726 16:03:35 -- nvmf/common.sh@120 -- # set +e 00:07:34.726 16:03:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.726 16:03:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.726 rmmod nvme_tcp 00:07:34.726 rmmod nvme_fabrics 00:07:34.726 rmmod nvme_keyring 00:07:34.726 16:03:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:34.726 16:03:35 -- nvmf/common.sh@124 -- # set -e 00:07:34.726 16:03:35 -- nvmf/common.sh@125 -- # return 0 
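The multitarget pass just traced is compact enough to restate end to end. A sketch using the suite's own helper script (full path shortened from the trace; jq length counts targets, and the default target always exists):

    multitarget_rpc.py nvmf_get_targets | jq length      # 1: only the default target
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length      # 3
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1  # prints true
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2  # prints true
    multitarget_rpc.py nvmf_get_targets | jq length      # back to 1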
00:07:34.726 16:03:35 -- nvmf/common.sh@478 -- # '[' -n 3317552 ']' 00:07:34.726 16:03:35 -- nvmf/common.sh@479 -- # killprocess 3317552 00:07:34.726 16:03:35 -- common/autotest_common.sh@936 -- # '[' -z 3317552 ']' 00:07:34.726 16:03:35 -- common/autotest_common.sh@940 -- # kill -0 3317552 00:07:34.726 16:03:35 -- common/autotest_common.sh@941 -- # uname 00:07:34.726 16:03:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:34.726 16:03:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3317552 00:07:34.984 16:03:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:34.984 16:03:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:34.984 16:03:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3317552' 00:07:34.984 killing process with pid 3317552 00:07:34.984 16:03:36 -- common/autotest_common.sh@955 -- # kill 3317552 00:07:34.984 16:03:36 -- common/autotest_common.sh@960 -- # wait 3317552 00:07:35.242 16:03:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:35.242 16:03:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:35.242 16:03:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:35.242 16:03:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:35.242 16:03:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:35.242 16:03:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.242 16:03:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.242 16:03:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.145 16:03:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.145 00:07:37.145 real 0m6.513s 00:07:37.145 user 0m9.150s 00:07:37.145 sys 0m2.065s 00:07:37.145 16:03:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.145 16:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:37.145 ************************************ 00:07:37.145 END TEST nvmf_multitarget 00:07:37.145 ************************************ 00:07:37.145 16:03:38 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:37.145 16:03:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:37.145 16:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.145 16:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:37.403 ************************************ 00:07:37.403 START TEST nvmf_rpc 00:07:37.403 ************************************ 00:07:37.403 16:03:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:37.403 * Looking for test storage... 
00:07:37.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.403 16:03:38 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.403 16:03:38 -- nvmf/common.sh@7 -- # uname -s 00:07:37.403 16:03:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.403 16:03:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.403 16:03:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.403 16:03:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.403 16:03:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.403 16:03:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.403 16:03:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.403 16:03:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.403 16:03:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.403 16:03:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.403 16:03:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:37.403 16:03:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:37.403 16:03:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.403 16:03:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.403 16:03:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.403 16:03:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.403 16:03:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.403 16:03:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.403 16:03:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.403 16:03:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.403 16:03:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.404 16:03:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.404 16:03:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.404 16:03:38 -- paths/export.sh@5 -- # export PATH 00:07:37.404 16:03:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.404 16:03:38 -- nvmf/common.sh@47 -- # : 0 00:07:37.404 16:03:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.404 16:03:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.404 16:03:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.404 16:03:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.404 16:03:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.404 16:03:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.404 16:03:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.404 16:03:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.404 16:03:38 -- target/rpc.sh@11 -- # loops=5 00:07:37.404 16:03:38 -- target/rpc.sh@23 -- # nvmftestinit 00:07:37.404 16:03:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:37.404 16:03:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.404 16:03:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:37.404 16:03:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:37.404 16:03:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:37.404 16:03:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.404 16:03:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.404 16:03:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.404 16:03:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:37.404 16:03:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:37.404 16:03:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.404 16:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:39.306 16:03:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:39.306 16:03:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.306 16:03:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.306 16:03:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.306 16:03:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.306 16:03:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.306 16:03:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.306 16:03:40 -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.306 16:03:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.306 16:03:40 -- nvmf/common.sh@296 -- # e810=() 00:07:39.306 16:03:40 -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.306 
16:03:40 -- nvmf/common.sh@297 -- # x722=() 00:07:39.306 16:03:40 -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.306 16:03:40 -- nvmf/common.sh@298 -- # mlx=() 00:07:39.306 16:03:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.306 16:03:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.306 16:03:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.306 16:03:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.306 16:03:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.306 16:03:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.306 16:03:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.306 16:03:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.306 16:03:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.306 16:03:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:39.306 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:39.306 16:03:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.306 16:03:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.306 16:03:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.307 16:03:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:39.307 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:39.307 16:03:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.307 16:03:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.307 16:03:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.307 16:03:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:39.307 16:03:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.307 16:03:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:39.307 Found net devices under 0000:09:00.0: cvl_0_0 00:07:39.307 16:03:40 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:39.307 16:03:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.307 16:03:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.307 16:03:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:39.307 16:03:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.307 16:03:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:39.307 Found net devices under 0000:09:00.1: cvl_0_1 00:07:39.307 16:03:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.307 16:03:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:39.307 16:03:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:39.307 16:03:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:39.307 16:03:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:39.307 16:03:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.307 16:03:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.307 16:03:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.307 16:03:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.307 16:03:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.307 16:03:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.307 16:03:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.307 16:03:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.307 16:03:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.307 16:03:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.307 16:03:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.307 16:03:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.307 16:03:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.307 16:03:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.307 16:03:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.307 16:03:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.307 16:03:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.565 16:03:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.565 16:03:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.565 16:03:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:07:39.565 00:07:39.565 --- 10.0.0.2 ping statistics --- 00:07:39.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.565 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:39.565 16:03:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:07:39.565 00:07:39.565 --- 10.0.0.1 ping statistics --- 00:07:39.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.565 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:07:39.565 16:03:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.565 16:03:40 -- nvmf/common.sh@411 -- # return 0 00:07:39.565 16:03:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:39.565 16:03:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.565 16:03:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:39.565 16:03:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:39.565 16:03:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.565 16:03:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:39.565 16:03:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:39.565 16:03:40 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:39.565 16:03:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:39.565 16:03:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:39.565 16:03:40 -- common/autotest_common.sh@10 -- # set +x 00:07:39.565 16:03:40 -- nvmf/common.sh@470 -- # nvmfpid=3319699 00:07:39.565 16:03:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.565 16:03:40 -- nvmf/common.sh@471 -- # waitforlisten 3319699 00:07:39.565 16:03:40 -- common/autotest_common.sh@817 -- # '[' -z 3319699 ']' 00:07:39.565 16:03:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.565 16:03:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:39.565 16:03:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.565 16:03:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:39.565 16:03:40 -- common/autotest_common.sh@10 -- # set +x 00:07:39.565 [2024-04-24 16:03:40.689902] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:07:39.565 [2024-04-24 16:03:40.689977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.565 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.565 [2024-04-24 16:03:40.761498] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.823 [2024-04-24 16:03:40.881142] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.823 [2024-04-24 16:03:40.881206] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.823 [2024-04-24 16:03:40.881224] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.823 [2024-04-24 16:03:40.881239] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.823 [2024-04-24 16:03:40.881251] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
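For both suites, nvmf_tcp_init builds the same two-namespace topology out of the E810 port netdevs found earlier: the target port (cvl_0_0) is moved into a namespace, the initiator port (cvl_0_1) stays in the root namespace, and each end pings the other before the target app starts. The commands, as traced above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator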
00:07:39.823 [2024-04-24 16:03:40.881347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.823 [2024-04-24 16:03:40.881407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.823 [2024-04-24 16:03:40.881463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.823 [2024-04-24 16:03:40.881468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.387 16:03:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:40.387 16:03:41 -- common/autotest_common.sh@850 -- # return 0 00:07:40.387 16:03:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:40.387 16:03:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:40.387 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.388 16:03:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.388 16:03:41 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:40.388 16:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.388 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.388 16:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.388 16:03:41 -- target/rpc.sh@26 -- # stats='{ 00:07:40.388 "tick_rate": 2700000000, 00:07:40.388 "poll_groups": [ 00:07:40.388 { 00:07:40.388 "name": "nvmf_tgt_poll_group_0", 00:07:40.388 "admin_qpairs": 0, 00:07:40.388 "io_qpairs": 0, 00:07:40.388 "current_admin_qpairs": 0, 00:07:40.388 "current_io_qpairs": 0, 00:07:40.388 "pending_bdev_io": 0, 00:07:40.388 "completed_nvme_io": 0, 00:07:40.388 "transports": [] 00:07:40.388 }, 00:07:40.388 { 00:07:40.388 "name": "nvmf_tgt_poll_group_1", 00:07:40.388 "admin_qpairs": 0, 00:07:40.388 "io_qpairs": 0, 00:07:40.388 "current_admin_qpairs": 0, 00:07:40.388 "current_io_qpairs": 0, 00:07:40.388 "pending_bdev_io": 0, 00:07:40.388 "completed_nvme_io": 0, 00:07:40.388 "transports": [] 00:07:40.388 }, 00:07:40.388 { 00:07:40.388 "name": "nvmf_tgt_poll_group_2", 00:07:40.388 "admin_qpairs": 0, 00:07:40.388 "io_qpairs": 0, 00:07:40.388 "current_admin_qpairs": 0, 00:07:40.388 "current_io_qpairs": 0, 00:07:40.388 "pending_bdev_io": 0, 00:07:40.388 "completed_nvme_io": 0, 00:07:40.388 "transports": [] 00:07:40.388 }, 00:07:40.388 { 00:07:40.388 "name": "nvmf_tgt_poll_group_3", 00:07:40.388 "admin_qpairs": 0, 00:07:40.388 "io_qpairs": 0, 00:07:40.388 "current_admin_qpairs": 0, 00:07:40.388 "current_io_qpairs": 0, 00:07:40.388 "pending_bdev_io": 0, 00:07:40.388 "completed_nvme_io": 0, 00:07:40.388 "transports": [] 00:07:40.388 } 00:07:40.388 ] 00:07:40.388 }' 00:07:40.646 16:03:41 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:40.646 16:03:41 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:40.646 16:03:41 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:40.646 16:03:41 -- target/rpc.sh@15 -- # wc -l 00:07:40.646 16:03:41 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:40.646 16:03:41 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:40.646 16:03:41 -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:40.646 16:03:41 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:40.646 16:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.646 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.646 [2024-04-24 16:03:41.753046] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.646 16:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.646 16:03:41 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:40.646 16:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.646 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.646 16:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.646 16:03:41 -- target/rpc.sh@33 -- # stats='{ 00:07:40.646 "tick_rate": 2700000000, 00:07:40.646 "poll_groups": [ 00:07:40.646 { 00:07:40.646 "name": "nvmf_tgt_poll_group_0", 00:07:40.646 "admin_qpairs": 0, 00:07:40.646 "io_qpairs": 0, 00:07:40.646 "current_admin_qpairs": 0, 00:07:40.646 "current_io_qpairs": 0, 00:07:40.646 "pending_bdev_io": 0, 00:07:40.646 "completed_nvme_io": 0, 00:07:40.646 "transports": [ 00:07:40.646 { 00:07:40.646 "trtype": "TCP" 00:07:40.646 } 00:07:40.646 ] 00:07:40.646 }, 00:07:40.646 { 00:07:40.646 "name": "nvmf_tgt_poll_group_1", 00:07:40.646 "admin_qpairs": 0, 00:07:40.646 "io_qpairs": 0, 00:07:40.646 "current_admin_qpairs": 0, 00:07:40.646 "current_io_qpairs": 0, 00:07:40.646 "pending_bdev_io": 0, 00:07:40.646 "completed_nvme_io": 0, 00:07:40.646 "transports": [ 00:07:40.646 { 00:07:40.646 "trtype": "TCP" 00:07:40.646 } 00:07:40.646 ] 00:07:40.646 }, 00:07:40.646 { 00:07:40.646 "name": "nvmf_tgt_poll_group_2", 00:07:40.646 "admin_qpairs": 0, 00:07:40.646 "io_qpairs": 0, 00:07:40.646 "current_admin_qpairs": 0, 00:07:40.646 "current_io_qpairs": 0, 00:07:40.646 "pending_bdev_io": 0, 00:07:40.646 "completed_nvme_io": 0, 00:07:40.646 "transports": [ 00:07:40.646 { 00:07:40.646 "trtype": "TCP" 00:07:40.646 } 00:07:40.646 ] 00:07:40.646 }, 00:07:40.646 { 00:07:40.646 "name": "nvmf_tgt_poll_group_3", 00:07:40.646 "admin_qpairs": 0, 00:07:40.646 "io_qpairs": 0, 00:07:40.646 "current_admin_qpairs": 0, 00:07:40.646 "current_io_qpairs": 0, 00:07:40.646 "pending_bdev_io": 0, 00:07:40.646 "completed_nvme_io": 0, 00:07:40.646 "transports": [ 00:07:40.646 { 00:07:40.646 "trtype": "TCP" 00:07:40.646 } 00:07:40.646 ] 00:07:40.646 } 00:07:40.646 ] 00:07:40.646 }' 00:07:40.646 16:03:41 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:40.646 16:03:41 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:40.646 16:03:41 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:40.646 16:03:41 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:40.646 16:03:41 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:40.646 16:03:41 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:40.646 16:03:41 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:40.646 16:03:41 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:40.646 16:03:41 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:40.646 16:03:41 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:40.646 16:03:41 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:40.646 16:03:41 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:40.646 16:03:41 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:40.646 16:03:41 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:40.646 16:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.646 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.646 Malloc1 00:07:40.646 16:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.646 16:03:41 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.646 16:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.646 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.646 
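The jcount and jsum helpers whose bodies appear in the trace reduce the nvmf_get_stats JSON to scalars for assertions: jcount counts the matches of a jq filter, jsum totals them with awk. Equivalently (rpc_cmd is the harness wrapper around the target's RPC socket):

  stats=$(rpc_cmd nvmf_get_stats)
  echo "$stats" | jq '.poll_groups[].name' | wc -l                              # 4 poll groups for -m 0xF
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'  # 0 before any host connects
  echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}'     # likewise 0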
16:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.646 16:03:41 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.646 16:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.646 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.646 16:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.646 16:03:41 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:40.646 16:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.646 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.646 16:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.646 16:03:41 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.646 16:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.646 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.646 [2024-04-24 16:03:41.902421] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.646 16:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.646 16:03:41 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:40.646 16:03:41 -- common/autotest_common.sh@638 -- # local es=0 00:07:40.646 16:03:41 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:40.646 16:03:41 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:40.646 16:03:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:40.646 16:03:41 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:40.646 16:03:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:40.646 16:03:41 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:40.646 16:03:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:40.646 16:03:41 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:40.646 16:03:41 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:40.646 16:03:41 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:40.646 [2024-04-24 16:03:41.924922] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:40.646 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:40.646 could not add new controller: failed to write to nvme-fabrics device 00:07:40.646 16:03:41 -- common/autotest_common.sh@641 -- # es=1 00:07:40.646 16:03:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:40.646 16:03:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:40.646 16:03:41 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:07:40.646 16:03:41 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.646 16:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.646 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.904 16:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.904 16:03:41 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.470 16:03:42 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.470 16:03:42 -- common/autotest_common.sh@1184 -- # local i=0 00:07:41.470 16:03:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.470 16:03:42 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:41.470 16:03:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:43.366 16:03:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:43.366 16:03:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:43.366 16:03:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:43.366 16:03:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:43.367 16:03:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:43.367 16:03:44 -- common/autotest_common.sh@1194 -- # return 0 00:07:43.367 16:03:44 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:43.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.367 16:03:44 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:43.367 16:03:44 -- common/autotest_common.sh@1205 -- # local i=0 00:07:43.367 16:03:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:43.367 16:03:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.367 16:03:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:43.367 16:03:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.624 16:03:44 -- common/autotest_common.sh@1217 -- # return 0 00:07:43.624 16:03:44 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:43.624 16:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.624 16:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.624 16:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.624 16:03:44 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.624 16:03:44 -- common/autotest_common.sh@638 -- # local es=0 00:07:43.624 16:03:44 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.624 16:03:44 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:43.624 16:03:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:43.624 16:03:44 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:43.624 16:03:44 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:43.624 16:03:44 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:43.624 16:03:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:43.624 16:03:44 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:43.624 16:03:44 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:43.624 16:03:44 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.624 [2024-04-24 16:03:44.683807] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:43.624 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:43.624 could not add new controller: failed to write to nvme-fabrics device 00:07:43.624 16:03:44 -- common/autotest_common.sh@641 -- # es=1 00:07:43.624 16:03:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:43.624 16:03:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:43.624 16:03:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:43.624 16:03:44 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:43.624 16:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.624 16:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.624 16:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.624 16:03:44 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:44.189 16:03:45 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:44.189 16:03:45 -- common/autotest_common.sh@1184 -- # local i=0 00:07:44.189 16:03:45 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:44.189 16:03:45 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:44.189 16:03:45 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:46.085 16:03:47 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:46.085 16:03:47 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:46.085 16:03:47 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:46.085 16:03:47 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:46.085 16:03:47 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:46.085 16:03:47 -- common/autotest_common.sh@1194 -- # return 0 00:07:46.085 16:03:47 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:46.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.343 16:03:47 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:46.343 16:03:47 -- common/autotest_common.sh@1205 -- # local i=0 00:07:46.343 16:03:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:46.343 16:03:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.343 16:03:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:46.343 16:03:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.343 16:03:47 -- common/autotest_common.sh@1217 -- # return 0 00:07:46.343 16:03:47 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.343 16:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.343 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:07:46.343 16:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.343 16:03:47 -- target/rpc.sh@81 -- # seq 1 5 00:07:46.343 16:03:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:46.343 16:03:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:46.343 16:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.343 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:07:46.343 16:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.343 16:03:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.343 16:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.343 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:07:46.343 [2024-04-24 16:03:47.432378] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.343 16:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.343 16:03:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:46.343 16:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.343 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:07:46.343 16:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.343 16:03:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:46.343 16:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.343 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:07:46.343 16:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.343 16:03:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:46.909 16:03:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:46.909 16:03:48 -- common/autotest_common.sh@1184 -- # local i=0 00:07:46.909 16:03:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:46.909 16:03:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:46.909 16:03:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:49.430 16:03:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:49.430 16:03:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:49.430 16:03:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:49.430 16:03:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:49.430 16:03:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:49.430 16:03:50 -- common/autotest_common.sh@1194 -- # return 0 00:07:49.430 16:03:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:49.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.430 16:03:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:49.430 16:03:50 -- common/autotest_common.sh@1205 -- # local i=0 00:07:49.430 16:03:50 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:49.430 16:03:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
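The NOT-wrapped connect attempts before this loop form a host-authorization round-trip against cnode1: the connect is refused while the host is unlisted, allowed after nvmf_subsystem_add_host, refused again after removal, and allowed for everyone once allow_any_host is re-enabled. In outline (NOT is the harness helper asserting a nonzero exit; a plain ! stands in for it here):

  subnqn=nqn.2016-06.io.spdk:cnode1
  connect() { nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
                  -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420; }
  ! connect                                        # no host listed: I/O error
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$NVME_HOSTNQN"
  connect && nvme disconnect -n "$subnqn"          # admitted explicitly
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$NVME_HOSTNQN"
  ! connect                                        # rejected again
  rpc_cmd nvmf_subsystem_allow_any_host -e "$subnqn"
  connect                                          # any host accepted now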
00:07:49.430 16:03:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:49.430 16:03:50 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:49.430 16:03:50 -- common/autotest_common.sh@1217 -- # return 0 00:07:49.430 16:03:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:49.430 16:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.430 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:49.430 16:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.430 16:03:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.430 16:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.431 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:49.431 16:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.431 16:03:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:49.431 16:03:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:49.431 16:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.431 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:49.431 16:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.431 16:03:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.431 16:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.431 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:49.431 [2024-04-24 16:03:50.197093] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.431 16:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.431 16:03:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:49.431 16:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.431 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:49.431 16:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.431 16:03:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:49.431 16:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.431 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:49.431 16:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.431 16:03:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.688 16:03:50 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:49.688 16:03:50 -- common/autotest_common.sh@1184 -- # local i=0 00:07:49.688 16:03:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:49.688 16:03:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:49.688 16:03:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:51.586 16:03:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:51.586 16:03:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:51.586 16:03:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:51.586 16:03:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:51.586 16:03:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:51.586 16:03:52 -- 
common/autotest_common.sh@1194 -- # return 0 00:07:51.586 16:03:52 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.586 16:03:52 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.586 16:03:52 -- common/autotest_common.sh@1205 -- # local i=0 00:07:51.844 16:03:52 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:51.844 16:03:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.844 16:03:52 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:51.844 16:03:52 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.844 16:03:52 -- common/autotest_common.sh@1217 -- # return 0 00:07:51.844 16:03:52 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.844 16:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.844 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:51.844 16:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.844 16:03:52 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.844 16:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.844 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:51.844 16:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.844 16:03:52 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:51.844 16:03:52 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:51.844 16:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.844 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:51.844 16:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.844 16:03:52 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.844 16:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.844 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:51.844 [2024-04-24 16:03:52.919997] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.844 16:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.844 16:03:52 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:51.844 16:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.844 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:51.844 16:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.844 16:03:52 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:51.844 16:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.844 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:51.844 16:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.844 16:03:52 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:52.408 16:03:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:52.408 16:03:53 -- common/autotest_common.sh@1184 -- # local i=0 00:07:52.408 16:03:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:52.408 16:03:53 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:07:52.408 16:03:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:54.305 16:03:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:54.305 16:03:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:54.305 16:03:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:54.305 16:03:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:54.305 16:03:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:54.305 16:03:55 -- common/autotest_common.sh@1194 -- # return 0 00:07:54.305 16:03:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:54.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.305 16:03:55 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:54.305 16:03:55 -- common/autotest_common.sh@1205 -- # local i=0 00:07:54.305 16:03:55 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:54.305 16:03:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.305 16:03:55 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:54.305 16:03:55 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.305 16:03:55 -- common/autotest_common.sh@1217 -- # return 0 00:07:54.305 16:03:55 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.305 16:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.305 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.305 16:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.306 16:03:55 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.306 16:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.306 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.563 16:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.563 16:03:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:54.563 16:03:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:54.563 16:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.563 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.563 16:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.563 16:03:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.563 16:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.563 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.563 [2024-04-24 16:03:55.604552] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.563 16:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.563 16:03:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:54.563 16:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.563 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.563 16:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.563 16:03:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:54.563 16:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.563 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.563 16:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.563 
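Each successful connect in this loop is gated by waitforserial, which polls the block-device list until a namespace carrying the subsystem serial appears; waitforserial_disconnect inverts the check after nvme disconnect. Roughly, per the helper bodies traced here:

  i=0
  while (( i++ <= 15 )); do                  # bounded retry, 2s apart
      sleep 2
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( nvme_devices >= 1 )) && break       # serial visible: device is up
  done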
16:03:55 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.128 16:03:56 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.128 16:03:56 -- common/autotest_common.sh@1184 -- # local i=0 00:07:55.128 16:03:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.128 16:03:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:55.128 16:03:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:57.026 16:03:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:57.026 16:03:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:57.027 16:03:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.027 16:03:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:57.027 16:03:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.027 16:03:58 -- common/autotest_common.sh@1194 -- # return 0 00:07:57.027 16:03:58 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.285 16:03:58 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.285 16:03:58 -- common/autotest_common.sh@1205 -- # local i=0 00:07:57.285 16:03:58 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:57.285 16:03:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.285 16:03:58 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:57.285 16:03:58 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.285 16:03:58 -- common/autotest_common.sh@1217 -- # return 0 00:07:57.285 16:03:58 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.285 16:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.285 16:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:57.285 16:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.285 16:03:58 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.285 16:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.285 16:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:57.285 16:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.285 16:03:58 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:57.285 16:03:58 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:57.285 16:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.285 16:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:57.285 16:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.285 16:03:58 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.285 16:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.285 16:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:57.285 [2024-04-24 16:03:58.378131] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.285 16:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.285 16:03:58 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:57.285 
16:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.285 16:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:57.285 16:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.285 16:03:58 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:57.285 16:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.285 16:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:57.285 16:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.285 16:03:58 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.850 16:03:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.850 16:03:58 -- common/autotest_common.sh@1184 -- # local i=0 00:07:57.850 16:03:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.850 16:03:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:57.850 16:03:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:59.747 16:04:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:59.747 16:04:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:59.747 16:04:00 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.747 16:04:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:59.747 16:04:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.747 16:04:00 -- common/autotest_common.sh@1194 -- # return 0 00:07:59.747 16:04:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.005 16:04:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.005 16:04:01 -- common/autotest_common.sh@1205 -- # local i=0 00:08:00.005 16:04:01 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:00.005 16:04:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.005 16:04:01 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:00.005 16:04:01 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.005 16:04:01 -- common/autotest_common.sh@1217 -- # return 0 00:08:00.005 16:04:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.005 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.005 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.005 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.005 16:04:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.005 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.005 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.005 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.005 16:04:01 -- target/rpc.sh@99 -- # seq 1 5 00:08:00.005 16:04:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.005 16:04:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.005 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.005 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.005 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.005 16:04:01 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.005 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.005 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.005 [2024-04-24 16:04:01.103706] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.005 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.005 16:04:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.005 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.006 16:04:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 [2024-04-24 16:04:01.151812] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.006 16:04:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 [2024-04-24 16:04:01.199998] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.006 16:04:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 [2024-04-24 16:04:01.248191] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 
16:04:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.006 16:04:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.006 16:04:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.006 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.006 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.265 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.265 16:04:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.265 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.265 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.265 [2024-04-24 16:04:01.296348] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.265 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.265 16:04:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.265 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.265 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.265 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.265 16:04:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.265 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.265 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.265 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.265 16:04:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.265 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.265 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.265 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.265 16:04:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.265 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.265 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.265 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.265 16:04:01 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
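
The five passes above replay the same subsystem lifecycle over JSON-RPC, and the nvmf_get_stats call just issued returns the per-poll-group counters that get totalled next. A condensed sketch of both pieces, assuming rpc.py is invoked directly against the target's default /var/tmp/spdk.sock (the trace goes through SPDK's rpc_cmd wrapper instead) and that Malloc1 already exists as a bdev:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    loops=5

    # Create a subsystem, wire it up, then tear it straight down again --
    # repeated $loops times to exercise setup/teardown churn on one NQN.
    for i in $(seq 1 $loops); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

    # jsum, as traced in target/rpc.sh: sum one numeric field across
    # every poll group in the nvmf_get_stats JSON read from stdin.
    jsum() {
        local filter=$1
        jq "$filter" | awk '{s+=$1} END {print s}'
    }
    # e.g. $rpc nvmf_get_stats | jsum '.poll_groups[].io_qpairs'   # 336 in this run
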
00:08:00.265 16:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.265 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.265 16:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.265 16:04:01 -- target/rpc.sh@110 -- # stats='{ 00:08:00.265 "tick_rate": 2700000000, 00:08:00.265 "poll_groups": [ 00:08:00.265 { 00:08:00.265 "name": "nvmf_tgt_poll_group_0", 00:08:00.265 "admin_qpairs": 2, 00:08:00.265 "io_qpairs": 84, 00:08:00.265 "current_admin_qpairs": 0, 00:08:00.265 "current_io_qpairs": 0, 00:08:00.265 "pending_bdev_io": 0, 00:08:00.265 "completed_nvme_io": 185, 00:08:00.265 "transports": [ 00:08:00.265 { 00:08:00.265 "trtype": "TCP" 00:08:00.265 } 00:08:00.265 ] 00:08:00.265 }, 00:08:00.265 { 00:08:00.265 "name": "nvmf_tgt_poll_group_1", 00:08:00.265 "admin_qpairs": 2, 00:08:00.265 "io_qpairs": 84, 00:08:00.265 "current_admin_qpairs": 0, 00:08:00.265 "current_io_qpairs": 0, 00:08:00.265 "pending_bdev_io": 0, 00:08:00.265 "completed_nvme_io": 184, 00:08:00.265 "transports": [ 00:08:00.265 { 00:08:00.265 "trtype": "TCP" 00:08:00.265 } 00:08:00.265 ] 00:08:00.265 }, 00:08:00.265 { 00:08:00.265 "name": "nvmf_tgt_poll_group_2", 00:08:00.265 "admin_qpairs": 1, 00:08:00.265 "io_qpairs": 84, 00:08:00.265 "current_admin_qpairs": 0, 00:08:00.265 "current_io_qpairs": 0, 00:08:00.265 "pending_bdev_io": 0, 00:08:00.265 "completed_nvme_io": 183, 00:08:00.265 "transports": [ 00:08:00.265 { 00:08:00.265 "trtype": "TCP" 00:08:00.265 } 00:08:00.265 ] 00:08:00.265 }, 00:08:00.265 { 00:08:00.265 "name": "nvmf_tgt_poll_group_3", 00:08:00.265 "admin_qpairs": 2, 00:08:00.265 "io_qpairs": 84, 00:08:00.265 "current_admin_qpairs": 0, 00:08:00.265 "current_io_qpairs": 0, 00:08:00.265 "pending_bdev_io": 0, 00:08:00.265 "completed_nvme_io": 134, 00:08:00.265 "transports": [ 00:08:00.265 { 00:08:00.265 "trtype": "TCP" 00:08:00.265 } 00:08:00.265 ] 00:08:00.265 } 00:08:00.265 ] 00:08:00.265 }' 00:08:00.265 16:04:01 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:00.265 16:04:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:00.265 16:04:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:00.265 16:04:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:00.265 16:04:01 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:00.265 16:04:01 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:00.265 16:04:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:00.265 16:04:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:00.265 16:04:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:00.265 16:04:01 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:08:00.265 16:04:01 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:00.265 16:04:01 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:00.265 16:04:01 -- target/rpc.sh@123 -- # nvmftestfini 00:08:00.265 16:04:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:00.265 16:04:01 -- nvmf/common.sh@117 -- # sync 00:08:00.265 16:04:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.265 16:04:01 -- nvmf/common.sh@120 -- # set +e 00:08:00.265 16:04:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.265 16:04:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.265 rmmod nvme_tcp 00:08:00.265 rmmod nvme_fabrics 00:08:00.265 rmmod nvme_keyring 00:08:00.265 16:04:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.265 16:04:01 -- nvmf/common.sh@124 -- # set -e 00:08:00.265 16:04:01 -- 
nvmf/common.sh@125 -- # return 0 00:08:00.265 16:04:01 -- nvmf/common.sh@478 -- # '[' -n 3319699 ']' 00:08:00.265 16:04:01 -- nvmf/common.sh@479 -- # killprocess 3319699 00:08:00.265 16:04:01 -- common/autotest_common.sh@936 -- # '[' -z 3319699 ']' 00:08:00.265 16:04:01 -- common/autotest_common.sh@940 -- # kill -0 3319699 00:08:00.265 16:04:01 -- common/autotest_common.sh@941 -- # uname 00:08:00.265 16:04:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:00.265 16:04:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3319699 00:08:00.265 16:04:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:00.265 16:04:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:00.265 16:04:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3319699' 00:08:00.265 killing process with pid 3319699 00:08:00.265 16:04:01 -- common/autotest_common.sh@955 -- # kill 3319699 00:08:00.265 16:04:01 -- common/autotest_common.sh@960 -- # wait 3319699 00:08:00.832 16:04:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:00.832 16:04:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:00.832 16:04:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:00.832 16:04:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.832 16:04:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.832 16:04:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.832 16:04:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.832 16:04:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.736 16:04:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.736 00:08:02.736 real 0m25.408s 00:08:02.736 user 1m23.019s 00:08:02.736 sys 0m3.748s 00:08:02.736 16:04:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.736 16:04:03 -- common/autotest_common.sh@10 -- # set +x 00:08:02.736 ************************************ 00:08:02.736 END TEST nvmf_rpc 00:08:02.736 ************************************ 00:08:02.736 16:04:03 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:02.736 16:04:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.736 16:04:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.736 16:04:03 -- common/autotest_common.sh@10 -- # set +x 00:08:02.736 ************************************ 00:08:02.736 START TEST nvmf_invalid 00:08:02.736 ************************************ 00:08:02.736 16:04:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:02.994 * Looking for test storage... 
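
Before the new test's storage probe completes, note the shutdown sequence that just closed nvmf_rpc: flush, best-effort unload of the initiator-side NVMe modules, then signal the target process and reap it. A minimal sketch, assuming the retry bound of 20 from the trace and omitting the reactor_0-vs-sudo process-name check:

    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            # nvme-tcp can still be busy right after a disconnect
            modprobe -v -r nvme-tcp && break   # assumption: loop exits on success
        done
        modprobe -v -r nvme-fabrics
        set -e
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid"       # error out early if the pid is already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"          # reap the target and propagate its exit status
    }
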
00:08:02.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.994 16:04:04 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.994 16:04:04 -- nvmf/common.sh@7 -- # uname -s 00:08:02.994 16:04:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.994 16:04:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.995 16:04:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.995 16:04:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.995 16:04:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.995 16:04:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.995 16:04:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.995 16:04:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.995 16:04:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.995 16:04:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.995 16:04:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:02.995 16:04:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:02.995 16:04:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.995 16:04:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.995 16:04:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.995 16:04:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.995 16:04:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.995 16:04:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.995 16:04:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.995 16:04:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.995 16:04:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.995 16:04:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.995 16:04:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.995 16:04:04 -- paths/export.sh@5 -- # export PATH 00:08:02.995 16:04:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.995 16:04:04 -- nvmf/common.sh@47 -- # : 0 00:08:02.995 16:04:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.995 16:04:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.995 16:04:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.995 16:04:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.995 16:04:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.995 16:04:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.995 16:04:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.995 16:04:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.995 16:04:04 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:02.995 16:04:04 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.995 16:04:04 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:02.995 16:04:04 -- target/invalid.sh@14 -- # target=foobar 00:08:02.995 16:04:04 -- target/invalid.sh@16 -- # RANDOM=0 00:08:02.995 16:04:04 -- target/invalid.sh@34 -- # nvmftestinit 00:08:02.995 16:04:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:02.995 16:04:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.995 16:04:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:02.995 16:04:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:02.995 16:04:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:02.995 16:04:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.995 16:04:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.995 16:04:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.995 16:04:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:02.995 16:04:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:02.995 16:04:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.995 16:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:04.897 16:04:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:04.897 16:04:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.897 16:04:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.897 16:04:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.897 16:04:05 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.897 16:04:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.897 16:04:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.897 16:04:05 -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.897 16:04:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:04.897 16:04:05 -- nvmf/common.sh@296 -- # e810=() 00:08:04.897 16:04:05 -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.897 16:04:05 -- nvmf/common.sh@297 -- # x722=() 00:08:04.897 16:04:05 -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.897 16:04:05 -- nvmf/common.sh@298 -- # mlx=() 00:08:04.897 16:04:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.897 16:04:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.897 16:04:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.897 16:04:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:04.897 16:04:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:04.897 16:04:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:04.897 16:04:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:04.898 16:04:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.898 16:04:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.898 16:04:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:04.898 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:04.898 16:04:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.898 16:04:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:04.898 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:04.898 16:04:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.898 16:04:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.898 
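
The for loop entered here resolves each matching E810 function (device ID 0x159b) to its kernel netdev by globbing sysfs; a condensed sketch of the body that the next lines trace:

    shopt -s nullglob   # assumption: so a port with no netdev yields an empty array

    # For every discovered PCI function, read the bound netdevs out of
    # /sys, strip the path prefix, and collect them for the test bed.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} == 0 )) && continue      # port has no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")         # e.g. cvl_0_0, cvl_0_1
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
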
16:04:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.898 16:04:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:04.898 16:04:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.898 16:04:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:04.898 Found net devices under 0000:09:00.0: cvl_0_0 00:08:04.898 16:04:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.898 16:04:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.898 16:04:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.898 16:04:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:04.898 16:04:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.898 16:04:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:04.898 Found net devices under 0000:09:00.1: cvl_0_1 00:08:04.898 16:04:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.898 16:04:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:04.898 16:04:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:04.898 16:04:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:04.898 16:04:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:04.898 16:04:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.898 16:04:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.898 16:04:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.898 16:04:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:04.898 16:04:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.898 16:04:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.898 16:04:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:04.898 16:04:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.898 16:04:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.898 16:04:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:04.898 16:04:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:04.898 16:04:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.898 16:04:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.898 16:04:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.898 16:04:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.898 16:04:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:04.898 16:04:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.898 16:04:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.898 16:04:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.898 16:04:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:04.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:08:04.898 00:08:04.898 --- 10.0.0.2 ping statistics --- 00:08:04.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.898 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:04.898 16:04:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:08:04.898 00:08:04.898 --- 10.0.0.1 ping statistics --- 00:08:04.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.898 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:08:04.898 16:04:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.898 16:04:06 -- nvmf/common.sh@411 -- # return 0 00:08:04.898 16:04:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:04.898 16:04:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.898 16:04:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:04.898 16:04:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:04.898 16:04:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.898 16:04:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:04.898 16:04:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:04.898 16:04:06 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:04.898 16:04:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:04.898 16:04:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:04.898 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:04.898 16:04:06 -- nvmf/common.sh@470 -- # nvmfpid=3324306 00:08:04.898 16:04:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.898 16:04:06 -- nvmf/common.sh@471 -- # waitforlisten 3324306 00:08:04.898 16:04:06 -- common/autotest_common.sh@817 -- # '[' -z 3324306 ']' 00:08:04.898 16:04:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.898 16:04:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:04.898 16:04:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.898 16:04:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:04.898 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:04.898 [2024-04-24 16:04:06.155388] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:08:04.898 [2024-04-24 16:04:06.155467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.190 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.190 [2024-04-24 16:04:06.222925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.190 [2024-04-24 16:04:06.339234] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.190 [2024-04-24 16:04:06.339300] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.190 [2024-04-24 16:04:06.339316] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.190 [2024-04-24 16:04:06.339330] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.190 [2024-04-24 16:04:06.339342] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
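
With the target app now starting, it is worth recapping how its TCP path was plumbed a moment ago: the target-side port is isolated in a network namespace so that 10.0.0.2 (target, cvl_0_0) and 10.0.0.1 (initiator, cvl_0_1) talk over the physical link between the two ports. Condensed from the nvmf_tcp_init trace above:

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Isolate the target-side port in its own namespace.
    ip netns add $NVMF_TARGET_NAMESPACE
    ip link set cvl_0_0 netns $NVMF_TARGET_NAMESPACE

    # Address both ends and bring the links (and namespace loopback) up.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NVMF_TARGET_NAMESPACE ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set cvl_0_0 up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set lo up

    # Let NVMe/TCP traffic in, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec $NVMF_TARGET_NAMESPACE ping -c 1 10.0.0.1
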
00:08:05.190 [2024-04-24 16:04:06.339436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.190 [2024-04-24 16:04:06.339488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.190 [2024-04-24 16:04:06.339541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.190 [2024-04-24 16:04:06.339545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.473 16:04:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:05.473 16:04:06 -- common/autotest_common.sh@850 -- # return 0 00:08:05.473 16:04:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:05.473 16:04:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:05.473 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.473 16:04:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.473 16:04:06 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:05.473 16:04:06 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode571 00:08:05.473 [2024-04-24 16:04:06.758545] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:05.730 16:04:06 -- target/invalid.sh@40 -- # out='request: 00:08:05.730 { 00:08:05.730 "nqn": "nqn.2016-06.io.spdk:cnode571", 00:08:05.730 "tgt_name": "foobar", 00:08:05.730 "method": "nvmf_create_subsystem", 00:08:05.730 "req_id": 1 00:08:05.730 } 00:08:05.730 Got JSON-RPC error response 00:08:05.730 response: 00:08:05.730 { 00:08:05.730 "code": -32603, 00:08:05.730 "message": "Unable to find target foobar" 00:08:05.730 }' 00:08:05.730 16:04:06 -- target/invalid.sh@41 -- # [[ request: 00:08:05.730 { 00:08:05.730 "nqn": "nqn.2016-06.io.spdk:cnode571", 00:08:05.730 "tgt_name": "foobar", 00:08:05.730 "method": "nvmf_create_subsystem", 00:08:05.730 "req_id": 1 00:08:05.730 } 00:08:05.730 Got JSON-RPC error response 00:08:05.730 response: 00:08:05.730 { 00:08:05.730 "code": -32603, 00:08:05.730 "message": "Unable to find target foobar" 00:08:05.730 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:05.730 16:04:06 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:05.730 16:04:06 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30037 00:08:05.988 [2024-04-24 16:04:07.023431] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30037: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:05.988 16:04:07 -- target/invalid.sh@45 -- # out='request: 00:08:05.988 { 00:08:05.988 "nqn": "nqn.2016-06.io.spdk:cnode30037", 00:08:05.988 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:05.988 "method": "nvmf_create_subsystem", 00:08:05.988 "req_id": 1 00:08:05.988 } 00:08:05.988 Got JSON-RPC error response 00:08:05.988 response: 00:08:05.988 { 00:08:05.988 "code": -32602, 00:08:05.988 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:05.988 }' 00:08:05.988 16:04:07 -- target/invalid.sh@46 -- # [[ request: 00:08:05.988 { 00:08:05.988 "nqn": "nqn.2016-06.io.spdk:cnode30037", 00:08:05.988 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:05.988 "method": "nvmf_create_subsystem", 00:08:05.988 "req_id": 1 00:08:05.988 } 00:08:05.988 Got JSON-RPC error response 00:08:05.988 response: 00:08:05.988 { 00:08:05.988 
"code": -32602, 00:08:05.988 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:05.988 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:05.988 16:04:07 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:05.988 16:04:07 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29809 00:08:05.988 [2024-04-24 16:04:07.268212] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29809: invalid model number 'SPDK_Controller' 00:08:06.247 16:04:07 -- target/invalid.sh@50 -- # out='request: 00:08:06.247 { 00:08:06.247 "nqn": "nqn.2016-06.io.spdk:cnode29809", 00:08:06.247 "model_number": "SPDK_Controller\u001f", 00:08:06.247 "method": "nvmf_create_subsystem", 00:08:06.247 "req_id": 1 00:08:06.247 } 00:08:06.247 Got JSON-RPC error response 00:08:06.247 response: 00:08:06.247 { 00:08:06.247 "code": -32602, 00:08:06.247 "message": "Invalid MN SPDK_Controller\u001f" 00:08:06.247 }' 00:08:06.247 16:04:07 -- target/invalid.sh@51 -- # [[ request: 00:08:06.247 { 00:08:06.247 "nqn": "nqn.2016-06.io.spdk:cnode29809", 00:08:06.247 "model_number": "SPDK_Controller\u001f", 00:08:06.247 "method": "nvmf_create_subsystem", 00:08:06.247 "req_id": 1 00:08:06.247 } 00:08:06.247 Got JSON-RPC error response 00:08:06.247 response: 00:08:06.247 { 00:08:06.247 "code": -32602, 00:08:06.247 "message": "Invalid MN SPDK_Controller\u001f" 00:08:06.247 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:06.247 16:04:07 -- target/invalid.sh@54 -- # gen_random_s 21 00:08:06.247 16:04:07 -- target/invalid.sh@19 -- # local length=21 ll 00:08:06.247 16:04:07 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:06.247 16:04:07 -- target/invalid.sh@21 -- # local chars 00:08:06.247 16:04:07 -- target/invalid.sh@22 -- # local string 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # printf %x 92 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # string+='\' 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # printf %x 49 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # string+=1 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # printf %x 67 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # string+=C 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # printf %x 43 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # 
echo -e '\x2b' 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # string+=+ 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # printf %x 102 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # string+=f 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # printf %x 73 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # string+=I 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # printf %x 72 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:06.247 16:04:07 -- target/invalid.sh@25 -- # string+=H 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.247 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 67 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=C 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 36 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+='$' 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 85 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=U 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 68 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=D 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 100 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=d 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 56 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=8 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 126 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+='~' 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 110 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo 
-e '\x6e' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=n 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 116 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=t 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 65 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=A 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 95 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=_ 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 40 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+='(' 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 78 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=N 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # printf %x 69 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:06.248 16:04:07 -- target/invalid.sh@25 -- # string+=E 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.248 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.248 16:04:07 -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:08:06.248 16:04:07 -- target/invalid.sh@31 -- # echo '\1C+fIHC$UDd8~ntA_(NE' 00:08:06.248 16:04:07 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\1C+fIHC$UDd8~ntA_(NE' nqn.2016-06.io.spdk:cnode13963 00:08:06.506 [2024-04-24 16:04:07.565241] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13963: invalid serial number '\1C+fIHC$UDd8~ntA_(NE' 00:08:06.506 16:04:07 -- target/invalid.sh@54 -- # out='request: 00:08:06.506 { 00:08:06.506 "nqn": "nqn.2016-06.io.spdk:cnode13963", 00:08:06.506 "serial_number": "\\1C+fIHC$UDd8~ntA_(NE", 00:08:06.506 "method": "nvmf_create_subsystem", 00:08:06.507 "req_id": 1 00:08:06.507 } 00:08:06.507 Got JSON-RPC error response 00:08:06.507 response: 00:08:06.507 { 00:08:06.507 "code": -32602, 00:08:06.507 "message": "Invalid SN \\1C+fIHC$UDd8~ntA_(NE" 00:08:06.507 }' 00:08:06.507 16:04:07 -- target/invalid.sh@55 -- # [[ request: 00:08:06.507 { 00:08:06.507 "nqn": "nqn.2016-06.io.spdk:cnode13963", 00:08:06.507 "serial_number": "\\1C+fIHC$UDd8~ntA_(NE", 00:08:06.507 "method": "nvmf_create_subsystem", 00:08:06.507 "req_id": 1 00:08:06.507 } 00:08:06.507 Got JSON-RPC error response 00:08:06.507 response: 00:08:06.507 { 00:08:06.507 "code": -32602, 00:08:06.507 "message": "Invalid SN 
\\1C+fIHC$UDd8~ntA_(NE" 00:08:06.507 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:06.507 16:04:07 -- target/invalid.sh@58 -- # gen_random_s 41 00:08:06.507 16:04:07 -- target/invalid.sh@19 -- # local length=41 ll 00:08:06.507 16:04:07 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:06.507 16:04:07 -- target/invalid.sh@21 -- # local chars 00:08:06.507 16:04:07 -- target/invalid.sh@22 -- # local string 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 36 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+='$' 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 73 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=I 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 69 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=E 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 103 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=g 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 47 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=/ 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 63 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+='?' 
00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 59 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=';' 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 87 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=W 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 96 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+='`' 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 125 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+='}' 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 123 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+='{' 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 103 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=g 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 117 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=u 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 68 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=D 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 44 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=, 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 121 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=y 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 52 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=4 
00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 47 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=/ 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 59 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=';' 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 102 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=f 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 53 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=5 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 99 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=c 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 50 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=2 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 72 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=H 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 124 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+='|' 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.507 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # printf %x 54 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:06.507 16:04:07 -- target/invalid.sh@25 -- # string+=6 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 104 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+=h 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 126 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+='~' 
00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 126 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+='~' 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 45 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+=- 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 63 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+='?' 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 83 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+=S 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 114 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+=r 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 64 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+=@ 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 53 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+=5 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 35 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+='#' 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 42 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+='*' 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 53 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+=5 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 109 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+=m 
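The (( ll++ )) / printf %x / echo -e records repeating through this stretch are a single loop in invalid.sh: it assembles a model-number string one printable character per iteration, converting each code point to hex with printf %x and materializing it with echo -e. A minimal standalone sketch of that pattern (the function name and the random source here are illustrative, not from the trace):

  gen_string() {
      # Build a string of printable ASCII (0x20-0x7e), one character per
      # pass, using the same printf %x / echo -e dance seen in the trace.
      local length=$1 ll string=''
      for (( ll = 0; ll < length; ll++ )); do
          local code=$(( RANDOM % 95 + 32 ))
          string+=$(echo -e "\x$(printf %x "$code")")
      done
      printf '%s\n' "$string"
  }
  gen_string 41   # yields something like the 41-character string echoed below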
00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 61 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+== 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # printf %x 32 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:06.508 16:04:07 -- target/invalid.sh@25 -- # string+=' ' 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.508 16:04:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.508 16:04:07 -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:08:06.508 16:04:07 -- target/invalid.sh@31 -- # echo '$IEg/?;W`}{guD,y4/;f5c2H|6h~~-?Sr@5#*5m= ' 00:08:06.508 16:04:07 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$IEg/?;W`}{guD,y4/;f5c2H|6h~~-?Sr@5#*5m= ' nqn.2016-06.io.spdk:cnode10682 00:08:06.766 [2024-04-24 16:04:08.006664] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10682: invalid model number '$IEg/?;W`}{guD,y4/;f5c2H|6h~~-?Sr@5#*5m= ' 00:08:06.766 16:04:08 -- target/invalid.sh@58 -- # out='request: 00:08:06.766 { 00:08:06.766 "nqn": "nqn.2016-06.io.spdk:cnode10682", 00:08:06.766 "model_number": "$IEg/?;W`}{guD,y4/;f5c2H|6h~~-?Sr@5#*5m= ", 00:08:06.766 "method": "nvmf_create_subsystem", 00:08:06.766 "req_id": 1 00:08:06.766 } 00:08:06.766 Got JSON-RPC error response 00:08:06.766 response: 00:08:06.766 { 00:08:06.766 "code": -32602, 00:08:06.766 "message": "Invalid MN $IEg/?;W`}{guD,y4/;f5c2H|6h~~-?Sr@5#*5m= " 00:08:06.766 }' 00:08:06.766 16:04:08 -- target/invalid.sh@59 -- # [[ request: 00:08:06.766 { 00:08:06.766 "nqn": "nqn.2016-06.io.spdk:cnode10682", 00:08:06.766 "model_number": "$IEg/?;W`}{guD,y4/;f5c2H|6h~~-?Sr@5#*5m= ", 00:08:06.766 "method": "nvmf_create_subsystem", 00:08:06.766 "req_id": 1 00:08:06.766 } 00:08:06.766 Got JSON-RPC error response 00:08:06.766 response: 00:08:06.766 { 00:08:06.766 "code": -32602, 00:08:06.766 "message": "Invalid MN $IEg/?;W`}{guD,y4/;f5c2H|6h~~-?Sr@5#*5m= " 00:08:06.766 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:06.766 16:04:08 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:07.023 [2024-04-24 16:04:08.251541] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.023 16:04:08 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:07.280 16:04:08 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:07.280 16:04:08 -- target/invalid.sh@67 -- # echo '' 00:08:07.280 16:04:08 -- target/invalid.sh@67 -- # head -n 1 00:08:07.280 16:04:08 -- target/invalid.sh@67 -- # IP= 00:08:07.280 16:04:08 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:07.537 [2024-04-24 16:04:08.733148] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:07.537 16:04:08 -- target/invalid.sh@69 -- # out='request: 00:08:07.537 { 00:08:07.537 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:07.537 "listen_address": { 
00:08:07.537 "trtype": "tcp", 00:08:07.537 "traddr": "", 00:08:07.537 "trsvcid": "4421" 00:08:07.537 }, 00:08:07.537 "method": "nvmf_subsystem_remove_listener", 00:08:07.537 "req_id": 1 00:08:07.537 } 00:08:07.537 Got JSON-RPC error response 00:08:07.537 response: 00:08:07.537 { 00:08:07.537 "code": -32602, 00:08:07.537 "message": "Invalid parameters" 00:08:07.537 }' 00:08:07.537 16:04:08 -- target/invalid.sh@70 -- # [[ request: 00:08:07.537 { 00:08:07.537 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:07.537 "listen_address": { 00:08:07.537 "trtype": "tcp", 00:08:07.537 "traddr": "", 00:08:07.537 "trsvcid": "4421" 00:08:07.537 }, 00:08:07.537 "method": "nvmf_subsystem_remove_listener", 00:08:07.537 "req_id": 1 00:08:07.537 } 00:08:07.537 Got JSON-RPC error response 00:08:07.537 response: 00:08:07.537 { 00:08:07.537 "code": -32602, 00:08:07.537 "message": "Invalid parameters" 00:08:07.537 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:07.537 16:04:08 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25954 -i 0 00:08:07.795 [2024-04-24 16:04:08.969874] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25954: invalid cntlid range [0-65519] 00:08:07.795 16:04:08 -- target/invalid.sh@73 -- # out='request: 00:08:07.795 { 00:08:07.795 "nqn": "nqn.2016-06.io.spdk:cnode25954", 00:08:07.795 "min_cntlid": 0, 00:08:07.795 "method": "nvmf_create_subsystem", 00:08:07.795 "req_id": 1 00:08:07.795 } 00:08:07.795 Got JSON-RPC error response 00:08:07.795 response: 00:08:07.795 { 00:08:07.795 "code": -32602, 00:08:07.795 "message": "Invalid cntlid range [0-65519]" 00:08:07.795 }' 00:08:07.795 16:04:08 -- target/invalid.sh@74 -- # [[ request: 00:08:07.795 { 00:08:07.795 "nqn": "nqn.2016-06.io.spdk:cnode25954", 00:08:07.795 "min_cntlid": 0, 00:08:07.795 "method": "nvmf_create_subsystem", 00:08:07.795 "req_id": 1 00:08:07.795 } 00:08:07.795 Got JSON-RPC error response 00:08:07.795 response: 00:08:07.795 { 00:08:07.795 "code": -32602, 00:08:07.795 "message": "Invalid cntlid range [0-65519]" 00:08:07.795 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:07.795 16:04:08 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15702 -i 65520 00:08:08.053 [2024-04-24 16:04:09.218675] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15702: invalid cntlid range [65520-65519] 00:08:08.053 16:04:09 -- target/invalid.sh@75 -- # out='request: 00:08:08.053 { 00:08:08.053 "nqn": "nqn.2016-06.io.spdk:cnode15702", 00:08:08.053 "min_cntlid": 65520, 00:08:08.053 "method": "nvmf_create_subsystem", 00:08:08.053 "req_id": 1 00:08:08.053 } 00:08:08.053 Got JSON-RPC error response 00:08:08.053 response: 00:08:08.053 { 00:08:08.053 "code": -32602, 00:08:08.053 "message": "Invalid cntlid range [65520-65519]" 00:08:08.053 }' 00:08:08.053 16:04:09 -- target/invalid.sh@76 -- # [[ request: 00:08:08.053 { 00:08:08.053 "nqn": "nqn.2016-06.io.spdk:cnode15702", 00:08:08.053 "min_cntlid": 65520, 00:08:08.053 "method": "nvmf_create_subsystem", 00:08:08.053 "req_id": 1 00:08:08.053 } 00:08:08.053 Got JSON-RPC error response 00:08:08.053 response: 00:08:08.053 { 00:08:08.053 "code": -32602, 00:08:08.053 "message": "Invalid cntlid range [65520-65519]" 00:08:08.053 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:08.053 16:04:09 -- target/invalid.sh@77 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17635 -I 0 00:08:08.310 [2024-04-24 16:04:09.463496] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17635: invalid cntlid range [1-0] 00:08:08.310 16:04:09 -- target/invalid.sh@77 -- # out='request: 00:08:08.310 { 00:08:08.310 "nqn": "nqn.2016-06.io.spdk:cnode17635", 00:08:08.310 "max_cntlid": 0, 00:08:08.310 "method": "nvmf_create_subsystem", 00:08:08.310 "req_id": 1 00:08:08.310 } 00:08:08.310 Got JSON-RPC error response 00:08:08.310 response: 00:08:08.310 { 00:08:08.310 "code": -32602, 00:08:08.310 "message": "Invalid cntlid range [1-0]" 00:08:08.310 }' 00:08:08.310 16:04:09 -- target/invalid.sh@78 -- # [[ request: 00:08:08.310 { 00:08:08.311 "nqn": "nqn.2016-06.io.spdk:cnode17635", 00:08:08.311 "max_cntlid": 0, 00:08:08.311 "method": "nvmf_create_subsystem", 00:08:08.311 "req_id": 1 00:08:08.311 } 00:08:08.311 Got JSON-RPC error response 00:08:08.311 response: 00:08:08.311 { 00:08:08.311 "code": -32602, 00:08:08.311 "message": "Invalid cntlid range [1-0]" 00:08:08.311 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:08.311 16:04:09 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28965 -I 65520 00:08:08.569 [2024-04-24 16:04:09.716338] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28965: invalid cntlid range [1-65520] 00:08:08.569 16:04:09 -- target/invalid.sh@79 -- # out='request: 00:08:08.569 { 00:08:08.569 "nqn": "nqn.2016-06.io.spdk:cnode28965", 00:08:08.569 "max_cntlid": 65520, 00:08:08.569 "method": "nvmf_create_subsystem", 00:08:08.569 "req_id": 1 00:08:08.569 } 00:08:08.569 Got JSON-RPC error response 00:08:08.569 response: 00:08:08.569 { 00:08:08.569 "code": -32602, 00:08:08.569 "message": "Invalid cntlid range [1-65520]" 00:08:08.569 }' 00:08:08.569 16:04:09 -- target/invalid.sh@80 -- # [[ request: 00:08:08.569 { 00:08:08.569 "nqn": "nqn.2016-06.io.spdk:cnode28965", 00:08:08.569 "max_cntlid": 65520, 00:08:08.569 "method": "nvmf_create_subsystem", 00:08:08.569 "req_id": 1 00:08:08.569 } 00:08:08.569 Got JSON-RPC error response 00:08:08.569 response: 00:08:08.569 { 00:08:08.569 "code": -32602, 00:08:08.569 "message": "Invalid cntlid range [1-65520]" 00:08:08.569 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:08.569 16:04:09 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31287 -i 6 -I 5 00:08:08.827 [2024-04-24 16:04:09.945099] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31287: invalid cntlid range [6-5] 00:08:08.827 16:04:09 -- target/invalid.sh@83 -- # out='request: 00:08:08.827 { 00:08:08.827 "nqn": "nqn.2016-06.io.spdk:cnode31287", 00:08:08.827 "min_cntlid": 6, 00:08:08.827 "max_cntlid": 5, 00:08:08.827 "method": "nvmf_create_subsystem", 00:08:08.827 "req_id": 1 00:08:08.827 } 00:08:08.827 Got JSON-RPC error response 00:08:08.827 response: 00:08:08.827 { 00:08:08.827 "code": -32602, 00:08:08.827 "message": "Invalid cntlid range [6-5]" 00:08:08.827 }' 00:08:08.827 16:04:09 -- target/invalid.sh@84 -- # [[ request: 00:08:08.827 { 00:08:08.827 "nqn": "nqn.2016-06.io.spdk:cnode31287", 00:08:08.827 "min_cntlid": 6, 00:08:08.827 "max_cntlid": 5, 00:08:08.827 "method": "nvmf_create_subsystem", 00:08:08.827 "req_id": 1 00:08:08.827 } 00:08:08.827 Got 
JSON-RPC error response 00:08:08.827 response: 00:08:08.827 { 00:08:08.827 "code": -32602, 00:08:08.827 "message": "Invalid cntlid range [6-5]" 00:08:08.827 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:08.827 16:04:09 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:08.827 16:04:10 -- target/invalid.sh@87 -- # out='request: 00:08:08.827 { 00:08:08.827 "name": "foobar", 00:08:08.827 "method": "nvmf_delete_target", 00:08:08.827 "req_id": 1 00:08:08.827 } 00:08:08.827 Got JSON-RPC error response 00:08:08.827 response: 00:08:08.827 { 00:08:08.827 "code": -32602, 00:08:08.827 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:08.827 }' 00:08:08.827 16:04:10 -- target/invalid.sh@88 -- # [[ request: 00:08:08.827 { 00:08:08.827 "name": "foobar", 00:08:08.827 "method": "nvmf_delete_target", 00:08:08.827 "req_id": 1 00:08:08.827 } 00:08:08.827 Got JSON-RPC error response 00:08:08.827 response: 00:08:08.827 { 00:08:08.827 "code": -32602, 00:08:08.827 "message": "The specified target doesn't exist, cannot delete it." 00:08:08.827 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:08.827 16:04:10 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:08.827 16:04:10 -- target/invalid.sh@91 -- # nvmftestfini 00:08:08.827 16:04:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:08.827 16:04:10 -- nvmf/common.sh@117 -- # sync 00:08:08.827 16:04:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.827 16:04:10 -- nvmf/common.sh@120 -- # set +e 00:08:08.827 16:04:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.827 16:04:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.827 rmmod nvme_tcp 00:08:08.827 rmmod nvme_fabrics 00:08:09.084 rmmod nvme_keyring 00:08:09.084 16:04:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.084 16:04:10 -- nvmf/common.sh@124 -- # set -e 00:08:09.084 16:04:10 -- nvmf/common.sh@125 -- # return 0 00:08:09.085 16:04:10 -- nvmf/common.sh@478 -- # '[' -n 3324306 ']' 00:08:09.085 16:04:10 -- nvmf/common.sh@479 -- # killprocess 3324306 00:08:09.085 16:04:10 -- common/autotest_common.sh@936 -- # '[' -z 3324306 ']' 00:08:09.085 16:04:10 -- common/autotest_common.sh@940 -- # kill -0 3324306 00:08:09.085 16:04:10 -- common/autotest_common.sh@941 -- # uname 00:08:09.085 16:04:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:09.085 16:04:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3324306 00:08:09.085 16:04:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:09.085 16:04:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:09.085 16:04:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3324306' 00:08:09.085 killing process with pid 3324306 00:08:09.085 16:04:10 -- common/autotest_common.sh@955 -- # kill 3324306 00:08:09.085 16:04:10 -- common/autotest_common.sh@960 -- # wait 3324306 00:08:09.344 16:04:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:09.344 16:04:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:09.344 16:04:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:09.344 16:04:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.344 16:04:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:09.344 16:04:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.344 
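Every rejection in this test follows one shape: send a deliberately malformed request, capture rpc.py's dump of the request plus the JSON-RPC error (code -32602, invalid params), and glob-match the message. Taken together, the five cntlid rejections above ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) bracket the window the target accepts: 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF). A sketch of the check pattern, with the rpc.py path abbreviated:

  out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25954 -i 0 2>&1) || true
  [[ $out == *'Invalid cntlid range'* ]] || exit 1   # same glob test as invalid.sh's [[ ... == *...* ]]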
16:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.344 16:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.245 16:04:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:11.245 00:08:11.245 real 0m8.479s 00:08:11.245 user 0m19.823s 00:08:11.245 sys 0m2.328s 00:08:11.245 16:04:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.245 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.245 ************************************ 00:08:11.245 END TEST nvmf_invalid 00:08:11.245 ************************************ 00:08:11.245 16:04:12 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:11.245 16:04:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:11.245 16:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.245 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.503 ************************************ 00:08:11.503 START TEST nvmf_abort 00:08:11.503 ************************************ 00:08:11.503 16:04:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:11.504 * Looking for test storage... 00:08:11.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.504 16:04:12 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.504 16:04:12 -- nvmf/common.sh@7 -- # uname -s 00:08:11.504 16:04:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.504 16:04:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.504 16:04:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.504 16:04:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.504 16:04:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.504 16:04:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.504 16:04:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.504 16:04:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.504 16:04:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.504 16:04:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.504 16:04:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:11.504 16:04:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:11.504 16:04:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.504 16:04:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.504 16:04:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.504 16:04:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.504 16:04:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.504 16:04:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.504 16:04:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.504 16:04:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.504 16:04:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.504 16:04:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.504 16:04:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.504 16:04:12 -- paths/export.sh@5 -- # export PATH 00:08:11.504 16:04:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.504 16:04:12 -- nvmf/common.sh@47 -- # : 0 00:08:11.504 16:04:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.504 16:04:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.504 16:04:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.504 16:04:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.504 16:04:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.504 16:04:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.504 16:04:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.504 16:04:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.504 16:04:12 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.504 16:04:12 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:11.504 16:04:12 -- target/abort.sh@14 -- # nvmftestinit 00:08:11.504 16:04:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:11.504 16:04:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.504 16:04:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:11.504 16:04:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:11.504 16:04:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:11.504 16:04:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:11.504 16:04:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.504 16:04:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.504 16:04:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:11.504 16:04:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:11.504 16:04:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:11.504 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:14.041 16:04:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:14.041 16:04:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.041 16:04:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.041 16:04:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.041 16:04:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.041 16:04:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.041 16:04:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.041 16:04:14 -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.041 16:04:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:14.041 16:04:14 -- nvmf/common.sh@296 -- # e810=() 00:08:14.041 16:04:14 -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.042 16:04:14 -- nvmf/common.sh@297 -- # x722=() 00:08:14.042 16:04:14 -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.042 16:04:14 -- nvmf/common.sh@298 -- # mlx=() 00:08:14.042 16:04:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.042 16:04:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.042 16:04:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.042 16:04:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.042 16:04:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.042 16:04:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.042 16:04:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:14.042 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:14.042 16:04:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.042 16:04:14 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:14.042 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:14.042 16:04:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.042 16:04:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.042 16:04:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.042 16:04:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:14.042 16:04:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.042 16:04:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:14.042 Found net devices under 0000:09:00.0: cvl_0_0 00:08:14.042 16:04:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.042 16:04:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.042 16:04:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.042 16:04:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:14.042 16:04:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.042 16:04:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:14.042 Found net devices under 0000:09:00.1: cvl_0_1 00:08:14.042 16:04:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.042 16:04:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:14.042 16:04:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:14.042 16:04:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:14.042 16:04:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.042 16:04:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.042 16:04:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.042 16:04:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.042 16:04:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.042 16:04:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.042 16:04:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.042 16:04:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.042 16:04:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.042 16:04:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.042 16:04:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.042 16:04:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.042 16:04:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.042 16:04:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.042 16:04:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.042 16:04:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:14.042 16:04:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
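The two "Found net devices under 0000:09:00.x" records above come down to a sysfs walk: common.sh caches PCI functions by vendor:device, keeps the Intel E810 IDs (0x1592, 0x159b), and lists each matching function's net interfaces. A read-only sketch of that walk (safe to run by hand):

  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor" 2>/dev/null) == 0x8086 ]] || continue        # Intel only
      case $(cat "$pci/device") in 0x1592|0x159b) ;; *) continue ;; esac  # E810 device IDs
      for dev in "$pci"/net/*; do
          [[ -e $dev ]] && echo "Found net devices under ${pci##*/}: ${dev##*/}"
      done
  done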
00:08:14.042 16:04:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.042 16:04:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.042 16:04:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:08:14.042 00:08:14.042 --- 10.0.0.2 ping statistics --- 00:08:14.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.042 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:08:14.042 16:04:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:14.042 00:08:14.042 --- 10.0.0.1 ping statistics --- 00:08:14.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.042 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:14.042 16:04:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.042 16:04:14 -- nvmf/common.sh@411 -- # return 0 00:08:14.042 16:04:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:14.042 16:04:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.042 16:04:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:14.042 16:04:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.042 16:04:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:14.042 16:04:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:14.042 16:04:14 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:14.042 16:04:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:14.042 16:04:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:14.042 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:08:14.042 16:04:14 -- nvmf/common.sh@470 -- # nvmfpid=3326951 00:08:14.042 16:04:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:14.042 16:04:14 -- nvmf/common.sh@471 -- # waitforlisten 3326951 00:08:14.042 16:04:14 -- common/autotest_common.sh@817 -- # '[' -z 3326951 ']' 00:08:14.042 16:04:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.042 16:04:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:14.042 16:04:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.042 16:04:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:14.042 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:08:14.042 [2024-04-24 16:04:14.946646] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
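Consolidated, the namespace plumbing just performed gives the target its own network stack: one E810 port (cvl_0_0, 10.0.0.2/24) moves into the cvl_0_0_ns_spdk namespace for nvmf_tgt, the peer port (cvl_0_1, 10.0.0.1/24) stays in the root namespace as the initiator, and the two pings prove each side reaches the other over the physical link:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (root netns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator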
00:08:14.042 [2024-04-24 16:04:14.946739] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.042 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.042 [2024-04-24 16:04:15.011966] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.042 [2024-04-24 16:04:15.115513] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.042 [2024-04-24 16:04:15.115578] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.042 [2024-04-24 16:04:15.115610] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.042 [2024-04-24 16:04:15.115622] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.043 [2024-04-24 16:04:15.115632] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.043 [2024-04-24 16:04:15.115793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.043 [2024-04-24 16:04:15.115890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.043 [2024-04-24 16:04:15.115899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.043 16:04:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:14.043 16:04:15 -- common/autotest_common.sh@850 -- # return 0 00:08:14.043 16:04:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:14.043 16:04:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:14.043 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.043 16:04:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.043 16:04:15 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:14.043 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.043 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.043 [2024-04-24 16:04:15.261895] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.043 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.043 16:04:15 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:14.043 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.043 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.043 Malloc0 00:08:14.043 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.043 16:04:15 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:14.043 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.043 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.043 Delay0 00:08:14.043 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.043 16:04:15 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:14.043 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.043 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.043 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.043 16:04:15 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:14.043 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:14.043 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.301 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.301 16:04:15 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:14.301 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.301 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.301 [2024-04-24 16:04:15.337828] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.301 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.301 16:04:15 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.301 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.301 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.301 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.301 16:04:15 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:14.301 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.301 [2024-04-24 16:04:15.402940] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:16.828 Initializing NVMe Controllers 00:08:16.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:16.828 controller IO queue size 128 less than required 00:08:16.828 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:16.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:16.828 Initialization complete. Launching workers. 
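The abort test's target stack, condensed from the RPCs above (paths abbreviated): a 64 MiB malloc disk wrapped in a delay bdev whose four latency values, interpreted as microseconds, add roughly one second to every read and write, exported as cnode0 over TCP. With the abort example queuing 128 I/Os against a namespace that slow, nearly every request is still outstanding when its abort arrives, which is exactly what the counters below exercise:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0           # 64 MiB, 4 KiB blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s added per I/O
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128                      # 1 s run, queue depth 128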
00:08:16.828 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29334 00:08:16.828 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29395, failed to submit 62 00:08:16.828 success 29338, unsuccess 57, failed 0 00:08:16.828 16:04:17 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.828 16:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.828 16:04:17 -- common/autotest_common.sh@10 -- # set +x 00:08:16.828 16:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.828 16:04:17 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:16.828 16:04:17 -- target/abort.sh@38 -- # nvmftestfini 00:08:16.828 16:04:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:16.828 16:04:17 -- nvmf/common.sh@117 -- # sync 00:08:16.828 16:04:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.828 16:04:17 -- nvmf/common.sh@120 -- # set +e 00:08:16.828 16:04:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.828 16:04:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.828 rmmod nvme_tcp 00:08:16.828 rmmod nvme_fabrics 00:08:16.828 rmmod nvme_keyring 00:08:16.828 16:04:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.828 16:04:17 -- nvmf/common.sh@124 -- # set -e 00:08:16.828 16:04:17 -- nvmf/common.sh@125 -- # return 0 00:08:16.828 16:04:17 -- nvmf/common.sh@478 -- # '[' -n 3326951 ']' 00:08:16.828 16:04:17 -- nvmf/common.sh@479 -- # killprocess 3326951 00:08:16.828 16:04:17 -- common/autotest_common.sh@936 -- # '[' -z 3326951 ']' 00:08:16.828 16:04:17 -- common/autotest_common.sh@940 -- # kill -0 3326951 00:08:16.828 16:04:17 -- common/autotest_common.sh@941 -- # uname 00:08:16.828 16:04:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:16.828 16:04:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3326951 00:08:16.828 16:04:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:16.828 16:04:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:16.828 16:04:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3326951' 00:08:16.828 killing process with pid 3326951 00:08:16.828 16:04:17 -- common/autotest_common.sh@955 -- # kill 3326951 00:08:16.828 16:04:17 -- common/autotest_common.sh@960 -- # wait 3326951 00:08:16.828 16:04:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:16.828 16:04:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:16.828 16:04:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:16.828 16:04:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.828 16:04:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.829 16:04:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.829 16:04:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.829 16:04:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.733 16:04:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.733 00:08:18.733 real 0m7.361s 00:08:18.733 user 0m10.466s 00:08:18.733 sys 0m2.559s 00:08:18.733 16:04:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:18.733 16:04:19 -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 ************************************ 00:08:18.733 END TEST nvmf_abort 00:08:18.733 ************************************ 00:08:18.733 16:04:19 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:18.733 16:04:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:18.733 16:04:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.733 16:04:19 -- common/autotest_common.sh@10 -- # set +x 00:08:18.991 ************************************ 00:08:18.991 START TEST nvmf_ns_hotplug_stress 00:08:18.991 ************************************ 00:08:18.991 16:04:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:18.991 * Looking for test storage... 00:08:18.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.991 16:04:20 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.991 16:04:20 -- nvmf/common.sh@7 -- # uname -s 00:08:18.991 16:04:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.991 16:04:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.991 16:04:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.991 16:04:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.991 16:04:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.991 16:04:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.991 16:04:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.991 16:04:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.991 16:04:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.991 16:04:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.991 16:04:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:18.991 16:04:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:18.991 16:04:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.991 16:04:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.991 16:04:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.991 16:04:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.991 16:04:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.991 16:04:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.991 16:04:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.991 16:04:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.991 16:04:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.991 16:04:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.991 16:04:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.991 16:04:20 -- paths/export.sh@5 -- # export PATH 00:08:18.991 16:04:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.991 16:04:20 -- nvmf/common.sh@47 -- # : 0 00:08:18.991 16:04:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.991 16:04:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.991 16:04:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.991 16:04:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.991 16:04:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.991 16:04:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.991 16:04:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.991 16:04:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.991 16:04:20 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.991 16:04:20 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:08:18.991 16:04:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:18.991 16:04:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.991 16:04:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:18.991 16:04:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:18.991 16:04:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:18.991 16:04:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.991 16:04:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.991 16:04:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.991 16:04:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:18.991 16:04:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:18.991 16:04:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.991 16:04:20 -- common/autotest_common.sh@10 -- # set +x 00:08:21.520 16:04:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:08:21.520 16:04:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.520 16:04:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.520 16:04:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.520 16:04:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.520 16:04:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.520 16:04:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.520 16:04:22 -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.521 16:04:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.521 16:04:22 -- nvmf/common.sh@296 -- # e810=() 00:08:21.521 16:04:22 -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.521 16:04:22 -- nvmf/common.sh@297 -- # x722=() 00:08:21.521 16:04:22 -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.521 16:04:22 -- nvmf/common.sh@298 -- # mlx=() 00:08:21.521 16:04:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.521 16:04:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.521 16:04:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.521 16:04:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.521 16:04:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.521 16:04:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.521 16:04:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:21.521 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:21.521 16:04:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.521 16:04:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:21.521 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:21.521 16:04:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:08:21.521 16:04:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.521 16:04:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.521 16:04:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:21.521 16:04:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.521 16:04:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:21.521 Found net devices under 0000:09:00.0: cvl_0_0 00:08:21.521 16:04:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.521 16:04:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.521 16:04:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.521 16:04:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:21.521 16:04:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.521 16:04:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:21.521 Found net devices under 0000:09:00.1: cvl_0_1 00:08:21.521 16:04:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.521 16:04:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:21.521 16:04:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:21.521 16:04:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:21.521 16:04:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.521 16:04:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.521 16:04:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.521 16:04:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.521 16:04:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.521 16:04:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.521 16:04:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.521 16:04:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.521 16:04:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.521 16:04:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.521 16:04:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.521 16:04:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.521 16:04:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.521 16:04:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.521 16:04:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.521 16:04:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.521 16:04:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.521 16:04:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.521 16:04:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.521 16:04:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:08:21.521 00:08:21.521 --- 10.0.0.2 ping statistics --- 00:08:21.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.521 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:21.521 16:04:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:08:21.521 00:08:21.521 --- 10.0.0.1 ping statistics --- 00:08:21.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.521 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:21.521 16:04:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.521 16:04:22 -- nvmf/common.sh@411 -- # return 0 00:08:21.521 16:04:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:21.521 16:04:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.521 16:04:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:21.521 16:04:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.521 16:04:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:21.521 16:04:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:21.521 16:04:22 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:08:21.521 16:04:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:21.521 16:04:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:21.521 16:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:21.521 16:04:22 -- nvmf/common.sh@470 -- # nvmfpid=3329181 00:08:21.521 16:04:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:21.521 16:04:22 -- nvmf/common.sh@471 -- # waitforlisten 3329181 00:08:21.521 16:04:22 -- common/autotest_common.sh@817 -- # '[' -z 3329181 ']' 00:08:21.521 16:04:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.521 16:04:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:21.521 16:04:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.521 16:04:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:21.521 16:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:21.521 [2024-04-24 16:04:22.387156] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:08:21.521 [2024-04-24 16:04:22.387226] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.521 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.521 [2024-04-24 16:04:22.461162] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.521 [2024-04-24 16:04:22.572494] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.521 [2024-04-24 16:04:22.572551] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
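The nvmf_tcp_init sequence traced above wires the two back-to-back E810 ports into a point-to-point test path: the target port (cvl_0_0, 10.0.0.2/24) moves into the cvl_0_0_ns_spdk namespace, the initiator port (cvl_0_1, 10.0.0.1/24) stays in the root namespace, an iptables rule admits TCP port 4420, and one ping in each direction proves the link before nvmf_tgt is launched under `ip netns exec`. The same plumbing, condensed (interface names are the cvl_* devices on this rig — substitute your own ports):

```bash
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0                     # start from a clean slate
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                # target port lives in the ns
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root ns
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec $NS ping -c 1 10.0.0.1         # target -> initiator
```

Isolating the target in its own namespace is what forces the traffic onto the physical wire: with both ports in one namespace the kernel would short-circuit 10.0.0.1 -> 10.0.0.2 over loopback and the NICs would never be exercised.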
00:08:21.521 [2024-04-24 16:04:22.572580] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.521 [2024-04-24 16:04:22.572592] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.521 [2024-04-24 16:04:22.572603] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.521 [2024-04-24 16:04:22.572735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.521 [2024-04-24 16:04:22.572799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.521 [2024-04-24 16:04:22.572804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.521 16:04:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:21.521 16:04:22 -- common/autotest_common.sh@850 -- # return 0 00:08:21.521 16:04:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:21.521 16:04:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:21.521 16:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:21.521 16:04:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.521 16:04:22 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:08:21.521 16:04:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:21.799 [2024-04-24 16:04:22.971547] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.799 16:04:22 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:22.063 16:04:23 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.320 [2024-04-24 16:04:23.506323] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.320 16:04:23 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.578 16:04:23 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:22.836 Malloc0 00:08:22.836 16:04:24 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:23.094 Delay0 00:08:23.094 16:04:24 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.351 16:04:24 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:23.608 NULL1 00:08:23.608 16:04:24 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:23.865 16:04:25 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3329597 00:08:23.865 16:04:25 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread 
-o 512 -Q 1000 00:08:23.866 16:04:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:23.866 16:04:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.866 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.123 16:04:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.380 16:04:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:08:24.380 16:04:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:24.637 true 00:08:24.637 16:04:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:24.637 16:04:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.894 16:04:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.160 16:04:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:08:25.160 16:04:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:25.160 true 00:08:25.419 16:04:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:25.419 16:04:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.351 Read completed with error (sct=0, sc=11) 00:08:26.351 16:04:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.351 16:04:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:08:26.351 16:04:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:26.608 true 00:08:26.608 16:04:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:26.608 16:04:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.865 16:04:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.122 16:04:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:08:27.122 16:04:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:27.379 true 00:08:27.379 16:04:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:27.379 16:04:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.310 16:04:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.567 16:04:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:08:28.567 16:04:29 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:28.825 true 00:08:28.825 16:04:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:28.825 16:04:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.082 16:04:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.339 16:04:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:08:29.339 16:04:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:29.597 true 00:08:29.597 16:04:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:29.597 16:04:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.855 16:04:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.112 16:04:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:08:30.112 16:04:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:30.370 true 00:08:30.370 16:04:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:30.370 16:04:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.301 16:04:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.559 16:04:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:08:31.559 16:04:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:31.815 true 00:08:31.815 16:04:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:31.815 16:04:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.745 16:04:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.002 16:04:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:08:33.002 16:04:34 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:33.259 true 00:08:33.259 16:04:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:33.259 16:04:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.514 16:04:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.771 16:04:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:08:33.771 16:04:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:34.028 true 00:08:34.028 16:04:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:34.028 16:04:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.960 16:04:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.960 16:04:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:08:34.960 16:04:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:35.247 true 00:08:35.247 16:04:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:35.247 16:04:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.529 16:04:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.787 16:04:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:08:35.787 16:04:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:36.045 true 00:08:36.045 16:04:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:36.045 16:04:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.979 16:04:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.236 16:04:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:08:37.236 16:04:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:37.494 true 00:08:37.494 16:04:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:37.494 16:04:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.751 16:04:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.008 16:04:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:08:38.008 
16:04:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:38.266 true 00:08:38.266 16:04:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:38.266 16:04:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.523 16:04:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.781 16:04:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:08:38.781 16:04:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:38.781 true 00:08:39.039 16:04:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:39.039 16:04:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.969 16:04:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.226 16:04:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:08:40.226 16:04:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:40.483 true 00:08:40.483 16:04:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:40.483 16:04:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.741 16:04:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.999 16:04:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:08:40.999 16:04:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:41.256 true 00:08:41.256 16:04:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:41.256 16:04:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.189 16:04:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.447 16:04:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:08:42.447 16:04:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:42.704 true 00:08:42.704 16:04:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:42.704 16:04:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:42.961 16:04:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.218 16:04:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:08:43.218 16:04:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:43.475 true 00:08:43.475 16:04:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:43.475 16:04:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.407 16:04:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.664 16:04:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:08:44.664 16:04:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:44.921 true 00:08:44.921 16:04:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:44.921 16:04:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.186 16:04:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.443 16:04:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:08:45.443 16:04:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:45.700 true 00:08:45.700 16:04:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:45.700 16:04:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.958 16:04:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.215 16:04:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:08:46.215 16:04:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:46.215 true 00:08:46.215 16:04:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:46.215 16:04:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.587 16:04:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
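The xtrace pattern repeating above (script lines @35-@41 of ns_hotplug_stress.sh) is the stress loop itself: for as long as spdk_nvme_perf (PID 3329597) is alive, the script detaches namespace 1 from cnode1, re-attaches Delay0, bumps NULL1's size by one unit, and resizes it, so the initiator's reads continually race namespace removal. The suppressed errors (sct=0, sc=11) are perf reporting NVMe generic status 0x0b, Invalid Namespace or Format — the expected completion for a read that lands in the window where the namespace is detached. The shape of the loop, reconstructed from the xtrace rather than the script source:

```bash
# Reconstructed from the xtrace above; not the verbatim script source.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID"; do    # keep going while perf is still running
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc" nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    "$rpc" bdev_null_resize NULL1 "$null_size"   # prints "true" on success
done
```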
00:08:47.587 16:04:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:08:47.587 16:04:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:47.845 true 00:08:47.845 16:04:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:47.845 16:04:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.776 16:04:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.776 16:04:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:08:48.776 16:04:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:49.033 true 00:08:49.033 16:04:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:49.033 16:04:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.290 16:04:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.549 16:04:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:08:49.549 16:04:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:49.807 true 00:08:49.807 16:04:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:49.807 16:04:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.739 16:04:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.996 16:04:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:08:50.996 16:04:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:51.253 true 00:08:51.253 16:04:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:51.253 16:04:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.511 16:04:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.768 16:04:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:08:51.768 16:04:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:52.025 true 00:08:52.025 16:04:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:52.025 16:04:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.958 16:04:54 -- 
target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.215 16:04:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:08:53.215 16:04:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:53.473 true 00:08:53.473 16:04:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:53.473 16:04:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.730 16:04:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.987 16:04:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:08:53.987 16:04:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:54.244 true 00:08:54.244 16:04:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:54.244 16:04:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.176 16:04:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.176 Initializing NVMe Controllers 00:08:55.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:55.176 Controller IO queue size 128, less than required. 00:08:55.176 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:55.176 Controller IO queue size 128, less than required. 00:08:55.176 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:55.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:55.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:55.176 Initialization complete. Launching workers. 
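Both "Controller IO queue size 128, less than required" notices in the initialization report come from the initiator side: an NVMe submission queue with N entries can hold at most N-1 outstanding commands (the queue is full when tail + 1 == head), so the requested -q 128 does not quite fit in a 128-entry I/O queue and the overflow simply waits inside the driver — harmless for a stress run, which only needs sustained load on both namespaces. The latency summary that follows reflects that load.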
00:08:55.176 ======================================================== 00:08:55.177 Latency(us) 00:08:55.177 Device Information : IOPS MiB/s Average min max 00:08:55.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1017.83 0.50 66501.73 2584.15 1012706.40 00:08:55.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10210.50 4.99 12498.51 3051.37 366739.72 00:08:55.177 ======================================================== 00:08:55.177 Total : 11228.33 5.48 17393.83 2584.15 1012706.40 00:08:55.177 00:08:55.177 16:04:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:08:55.177 16:04:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:55.434 true 00:08:55.434 16:04:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3329597 00:08:55.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3329597) - No such process 00:08:55.434 16:04:56 -- target/ns_hotplug_stress.sh@44 -- # wait 3329597 00:08:55.434 16:04:56 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:08:55.434 16:04:56 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:08:55.434 16:04:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:55.434 16:04:56 -- nvmf/common.sh@117 -- # sync 00:08:55.434 16:04:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:55.434 16:04:56 -- nvmf/common.sh@120 -- # set +e 00:08:55.434 16:04:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:55.434 16:04:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:55.434 rmmod nvme_tcp 00:08:55.434 rmmod nvme_fabrics 00:08:55.434 rmmod nvme_keyring 00:08:55.693 16:04:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:55.693 16:04:56 -- nvmf/common.sh@124 -- # set -e 00:08:55.693 16:04:56 -- nvmf/common.sh@125 -- # return 0 00:08:55.693 16:04:56 -- nvmf/common.sh@478 -- # '[' -n 3329181 ']' 00:08:55.693 16:04:56 -- nvmf/common.sh@479 -- # killprocess 3329181 00:08:55.693 16:04:56 -- common/autotest_common.sh@936 -- # '[' -z 3329181 ']' 00:08:55.693 16:04:56 -- common/autotest_common.sh@940 -- # kill -0 3329181 00:08:55.693 16:04:56 -- common/autotest_common.sh@941 -- # uname 00:08:55.693 16:04:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:55.693 16:04:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3329181 00:08:55.693 16:04:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:55.693 16:04:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:55.693 16:04:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3329181' 00:08:55.693 killing process with pid 3329181 00:08:55.693 16:04:56 -- common/autotest_common.sh@955 -- # kill 3329181 00:08:55.693 16:04:56 -- common/autotest_common.sh@960 -- # wait 3329181 00:08:55.952 16:04:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:55.952 16:04:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:55.952 16:04:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:55.952 16:04:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.952 16:04:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:55.952 16:04:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.952 16:04:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.952 16:04:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.854 
16:04:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:57.854 00:08:57.854 real 0m39.008s 00:08:57.854 user 2m31.771s 00:08:57.854 sys 0m10.388s 00:08:57.854 16:04:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:57.854 16:04:59 -- common/autotest_common.sh@10 -- # set +x 00:08:57.854 ************************************ 00:08:57.854 END TEST nvmf_ns_hotplug_stress 00:08:57.854 ************************************ 00:08:57.854 16:04:59 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:57.854 16:04:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:57.854 16:04:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.854 16:04:59 -- common/autotest_common.sh@10 -- # set +x 00:08:58.113 ************************************ 00:08:58.113 START TEST nvmf_connect_stress 00:08:58.113 ************************************ 00:08:58.113 16:04:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:58.113 * Looking for test storage... 00:08:58.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.113 16:04:59 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.113 16:04:59 -- nvmf/common.sh@7 -- # uname -s 00:08:58.113 16:04:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.113 16:04:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.113 16:04:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.114 16:04:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.114 16:04:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.114 16:04:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.114 16:04:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.114 16:04:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.114 16:04:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.114 16:04:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.114 16:04:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:58.114 16:04:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:58.114 16:04:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.114 16:04:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.114 16:04:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.114 16:04:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.114 16:04:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.114 16:04:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.114 16:04:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.114 16:04:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.114 16:04:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.114 16:04:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.114 16:04:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.114 16:04:59 -- paths/export.sh@5 -- # export PATH 00:08:58.114 16:04:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.114 16:04:59 -- nvmf/common.sh@47 -- # : 0 00:08:58.114 16:04:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.114 16:04:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.114 16:04:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.114 16:04:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.114 16:04:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.114 16:04:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.114 16:04:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.114 16:04:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.114 16:04:59 -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:58.114 16:04:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:58.114 16:04:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.114 16:04:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:58.114 16:04:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:58.114 16:04:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:58.114 16:04:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.114 16:04:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.114 16:04:59 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.114 16:04:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:58.114 16:04:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:58.114 16:04:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.114 16:04:59 -- common/autotest_common.sh@10 -- # set +x 00:09:00.647 16:05:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:00.647 16:05:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.647 16:05:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.647 16:05:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.647 16:05:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.647 16:05:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.647 16:05:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.647 16:05:01 -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.647 16:05:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.647 16:05:01 -- nvmf/common.sh@296 -- # e810=() 00:09:00.647 16:05:01 -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.647 16:05:01 -- nvmf/common.sh@297 -- # x722=() 00:09:00.647 16:05:01 -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.647 16:05:01 -- nvmf/common.sh@298 -- # mlx=() 00:09:00.647 16:05:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.647 16:05:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.647 16:05:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.647 16:05:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.647 16:05:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.647 16:05:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.647 16:05:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:00.647 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:00.647 16:05:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.647 16:05:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:00.647 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:00.647 
16:05:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.647 16:05:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.647 16:05:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.647 16:05:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:00.647 16:05:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.647 16:05:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:00.647 Found net devices under 0000:09:00.0: cvl_0_0 00:09:00.647 16:05:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.647 16:05:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.647 16:05:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.647 16:05:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:00.647 16:05:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.647 16:05:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:00.647 Found net devices under 0000:09:00.1: cvl_0_1 00:09:00.647 16:05:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.647 16:05:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:00.647 16:05:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:00.647 16:05:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:00.647 16:05:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:00.647 16:05:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.647 16:05:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.647 16:05:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.647 16:05:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.647 16:05:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.647 16:05:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.647 16:05:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.647 16:05:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.647 16:05:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.647 16:05:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.647 16:05:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.648 16:05:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.648 16:05:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.648 16:05:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.648 16:05:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.648 16:05:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.648 16:05:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.648 16:05:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.648 16:05:01 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.648 16:05:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:09:00.648 00:09:00.648 --- 10.0.0.2 ping statistics --- 00:09:00.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.648 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:09:00.648 16:05:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:09:00.648 00:09:00.648 --- 10.0.0.1 ping statistics --- 00:09:00.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.648 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:00.648 16:05:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.648 16:05:01 -- nvmf/common.sh@411 -- # return 0 00:09:00.648 16:05:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:00.648 16:05:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.648 16:05:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:00.648 16:05:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:00.648 16:05:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.648 16:05:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:00.648 16:05:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:00.648 16:05:01 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:00.648 16:05:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:00.648 16:05:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:00.648 16:05:01 -- common/autotest_common.sh@10 -- # set +x 00:09:00.648 16:05:01 -- nvmf/common.sh@470 -- # nvmfpid=3335330 00:09:00.648 16:05:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:00.648 16:05:01 -- nvmf/common.sh@471 -- # waitforlisten 3335330 00:09:00.648 16:05:01 -- common/autotest_common.sh@817 -- # '[' -z 3335330 ']' 00:09:00.648 16:05:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.648 16:05:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:00.648 16:05:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.648 16:05:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:00.648 16:05:01 -- common/autotest_common.sh@10 -- # set +x 00:09:00.648 [2024-04-24 16:05:01.515657] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:09:00.648 [2024-04-24 16:05:01.515735] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.648 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.648 [2024-04-24 16:05:01.584561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:00.648 [2024-04-24 16:05:01.695918] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
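As in the first test, nvmfappstart launches nvmf_tgt inside the namespace, records nvmfpid, and waitforlisten (max_retries=100) blocks until the application answers on /var/tmp/spdk.sock before any rpc_cmd is issued. A minimal stand-in for that wait, assuming rpc.py's rpc_get_methods as the liveness probe — the real helper in autotest_common.sh differs in detail:

```bash
# Minimal waitforlisten stand-in (assumption: not the autotest_common.sh
# implementation). Polls the RPC socket until the app answers, or gives
# up after max_retries; bails out early if the process has already died.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1      # app died before listening
        if rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                                # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}
```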
00:09:00.648 [2024-04-24 16:05:01.695986] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.648 [2024-04-24 16:05:01.696002] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.648 [2024-04-24 16:05:01.696016] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.648 [2024-04-24 16:05:01.696036] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.648 [2024-04-24 16:05:01.696127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.648 [2024-04-24 16:05:01.696239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.648 [2024-04-24 16:05:01.696257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.211 16:05:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:01.211 16:05:02 -- common/autotest_common.sh@850 -- # return 0 00:09:01.211 16:05:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:01.211 16:05:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:01.211 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.469 16:05:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.469 16:05:02 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.469 16:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.469 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.469 [2024-04-24 16:05:02.507939] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.469 16:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.469 16:05:02 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:01.469 16:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.469 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.469 16:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.469 16:05:02 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.469 16:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.469 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.469 [2024-04-24 16:05:02.538892] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.469 16:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.469 16:05:02 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:01.469 16:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.469 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.469 NULL1 00:09:01.469 16:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.469 16:05:02 -- target/connect_stress.sh@21 -- # PERF_PID=3335486 00:09:01.469 16:05:02 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:01.469 16:05:02 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:01.469 16:05:02 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # seq 1 20 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:01.469 16:05:02 -- target/connect_stress.sh@28 -- # cat 00:09:01.469 16:05:02 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:01.469 16:05:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.469 16:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.469 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.726 16:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.726 16:05:02 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:01.726 16:05:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.726 16:05:02 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.726 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.983 16:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.983 16:05:03 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:01.983 16:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.983 16:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.983 16:05:03 -- common/autotest_common.sh@10 -- # set +x 00:09:02.548 16:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.548 16:05:03 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:02.548 16:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:02.548 16:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.548 16:05:03 -- common/autotest_common.sh@10 -- # set +x 00:09:02.805 16:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.805 16:05:03 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:02.805 16:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:02.805 16:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.805 16:05:03 -- common/autotest_common.sh@10 -- # set +x 00:09:03.063 16:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.063 16:05:04 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:03.063 16:05:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.063 16:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.063 16:05:04 -- common/autotest_common.sh@10 -- # set +x 00:09:03.320 16:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.320 16:05:04 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:03.320 16:05:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.320 16:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.320 16:05:04 -- common/autotest_common.sh@10 -- # set +x 00:09:03.606 16:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.606 16:05:04 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:03.606 16:05:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.606 16:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.606 16:05:04 -- common/autotest_common.sh@10 -- # set +x 00:09:03.882 16:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.883 16:05:05 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:03.883 16:05:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.883 16:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.883 16:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:04.447 16:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.447 16:05:05 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:04.447 16:05:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.447 16:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.447 16:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:04.704 16:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.704 16:05:05 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:04.704 16:05:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.704 16:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.704 16:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:04.962 16:05:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.962 16:05:06 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:04.962 16:05:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.962 16:05:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.962 16:05:06 -- common/autotest_common.sh@10 -- # set +x 00:09:05.220 16:05:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.220 16:05:06 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:05.220 16:05:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.220 16:05:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.220 16:05:06 -- common/autotest_common.sh@10 -- # set +x 00:09:05.784 16:05:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.785 16:05:06 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:05.785 16:05:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.785 16:05:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.785 16:05:06 -- common/autotest_common.sh@10 -- # set +x 00:09:06.042 16:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.042 16:05:07 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:06.042 16:05:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.042 16:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.042 16:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:06.300 16:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.300 16:05:07 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:06.300 16:05:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.300 16:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.300 16:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:06.557 16:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.557 16:05:07 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:06.557 16:05:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.557 16:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.557 16:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:06.814 16:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.814 16:05:08 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:06.814 16:05:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.814 16:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.814 16:05:08 -- common/autotest_common.sh@10 -- # set +x 00:09:07.378 16:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.378 16:05:08 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:07.378 16:05:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.378 16:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.378 16:05:08 -- common/autotest_common.sh@10 -- # set +x 00:09:07.635 16:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.635 16:05:08 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:07.635 16:05:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.635 16:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.635 16:05:08 -- common/autotest_common.sh@10 -- # set +x 00:09:07.892 16:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.893 16:05:09 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:07.893 16:05:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.893 16:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.893 16:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.150 16:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.150 16:05:09 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:08.150 16:05:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.150 16:05:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.150 16:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.407 16:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.407 16:05:09 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:08.407 16:05:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.407 16:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.407 16:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.975 16:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.975 16:05:09 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:08.975 16:05:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.975 16:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.975 16:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:09.232 16:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.232 16:05:10 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:09.232 16:05:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.232 16:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.232 16:05:10 -- common/autotest_common.sh@10 -- # set +x 00:09:09.489 16:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.489 16:05:10 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:09.489 16:05:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.489 16:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.489 16:05:10 -- common/autotest_common.sh@10 -- # set +x 00:09:09.747 16:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.747 16:05:10 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:09.747 16:05:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.747 16:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.747 16:05:10 -- common/autotest_common.sh@10 -- # set +x 00:09:10.005 16:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.005 16:05:11 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:10.005 16:05:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.005 16:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.005 16:05:11 -- common/autotest_common.sh@10 -- # set +x 00:09:10.569 16:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.569 16:05:11 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:10.569 16:05:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.570 16:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.570 16:05:11 -- common/autotest_common.sh@10 -- # set +x 00:09:10.827 16:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.827 16:05:11 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:10.827 16:05:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.827 16:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.827 16:05:11 -- common/autotest_common.sh@10 -- # set +x 00:09:11.085 16:05:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.085 16:05:12 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:11.085 16:05:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.085 16:05:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.085 16:05:12 -- common/autotest_common.sh@10 -- # set +x 00:09:11.343 16:05:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.343 16:05:12 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:11.343 16:05:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.343 16:05:12 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.343 16:05:12 -- common/autotest_common.sh@10 -- # set +x 00:09:11.601 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:11.601 16:05:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.601 16:05:12 -- target/connect_stress.sh@34 -- # kill -0 3335486 00:09:11.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3335486) - No such process 00:09:11.601 16:05:12 -- target/connect_stress.sh@38 -- # wait 3335486 00:09:11.601 16:05:12 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:11.601 16:05:12 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:11.601 16:05:12 -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:11.601 16:05:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:11.601 16:05:12 -- nvmf/common.sh@117 -- # sync 00:09:11.601 16:05:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.601 16:05:12 -- nvmf/common.sh@120 -- # set +e 00:09:11.601 16:05:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.601 16:05:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.601 rmmod nvme_tcp 00:09:11.858 rmmod nvme_fabrics 00:09:11.858 rmmod nvme_keyring 00:09:11.858 16:05:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.858 16:05:12 -- nvmf/common.sh@124 -- # set -e 00:09:11.858 16:05:12 -- nvmf/common.sh@125 -- # return 0 00:09:11.858 16:05:12 -- nvmf/common.sh@478 -- # '[' -n 3335330 ']' 00:09:11.858 16:05:12 -- nvmf/common.sh@479 -- # killprocess 3335330 00:09:11.858 16:05:12 -- common/autotest_common.sh@936 -- # '[' -z 3335330 ']' 00:09:11.858 16:05:12 -- common/autotest_common.sh@940 -- # kill -0 3335330 00:09:11.858 16:05:12 -- common/autotest_common.sh@941 -- # uname 00:09:11.858 16:05:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:11.858 16:05:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3335330 00:09:11.858 16:05:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:11.858 16:05:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:11.858 16:05:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3335330' 00:09:11.858 killing process with pid 3335330 00:09:11.858 16:05:12 -- common/autotest_common.sh@955 -- # kill 3335330 00:09:11.858 16:05:12 -- common/autotest_common.sh@960 -- # wait 3335330 00:09:12.117 16:05:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:12.117 16:05:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:12.117 16:05:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:12.117 16:05:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.117 16:05:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.117 16:05:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.117 16:05:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.117 16:05:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.019 16:05:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:14.019 00:09:14.019 real 0m16.027s 00:09:14.019 user 0m40.157s 00:09:14.019 sys 0m6.142s 00:09:14.019 16:05:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:14.019 16:05:15 -- common/autotest_common.sh@10 -- # set +x 00:09:14.019 ************************************ 00:09:14.019 END TEST nvmf_connect_stress 00:09:14.019 
************************************ 00:09:14.019 16:05:15 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:14.019 16:05:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:14.019 16:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.019 16:05:15 -- common/autotest_common.sh@10 -- # set +x 00:09:14.278 ************************************ 00:09:14.278 START TEST nvmf_fused_ordering 00:09:14.278 ************************************ 00:09:14.278 16:05:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:14.278 * Looking for test storage... 00:09:14.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.278 16:05:15 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.278 16:05:15 -- nvmf/common.sh@7 -- # uname -s 00:09:14.278 16:05:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.278 16:05:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.278 16:05:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.278 16:05:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.278 16:05:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.278 16:05:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.278 16:05:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.278 16:05:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.278 16:05:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.278 16:05:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.278 16:05:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:14.278 16:05:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:14.278 16:05:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.278 16:05:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.278 16:05:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.278 16:05:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.278 16:05:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.278 16:05:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.278 16:05:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.278 16:05:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.278 16:05:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.278 16:05:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.278 16:05:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.278 16:05:15 -- paths/export.sh@5 -- # export PATH 00:09:14.278 16:05:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.278 16:05:15 -- nvmf/common.sh@47 -- # : 0 00:09:14.278 16:05:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.278 16:05:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.278 16:05:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.278 16:05:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.278 16:05:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.278 16:05:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.278 16:05:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.278 16:05:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.278 16:05:15 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:14.278 16:05:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:14.278 16:05:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.278 16:05:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:14.278 16:05:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:14.278 16:05:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:14.278 16:05:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.278 16:05:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.278 16:05:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.278 16:05:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:14.278 16:05:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:14.278 16:05:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:14.278 16:05:15 -- common/autotest_common.sh@10 -- # set +x 00:09:16.179 16:05:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:16.179 16:05:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:16.179 16:05:17 -- nvmf/common.sh@291 -- # local -a pci_devs 
00:09:16.179 16:05:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:16.179 16:05:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:16.179 16:05:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:16.179 16:05:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:16.179 16:05:17 -- nvmf/common.sh@295 -- # net_devs=() 00:09:16.179 16:05:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:16.179 16:05:17 -- nvmf/common.sh@296 -- # e810=() 00:09:16.179 16:05:17 -- nvmf/common.sh@296 -- # local -ga e810 00:09:16.179 16:05:17 -- nvmf/common.sh@297 -- # x722=() 00:09:16.179 16:05:17 -- nvmf/common.sh@297 -- # local -ga x722 00:09:16.179 16:05:17 -- nvmf/common.sh@298 -- # mlx=() 00:09:16.179 16:05:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:16.179 16:05:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.179 16:05:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:16.179 16:05:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:16.179 16:05:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:16.179 16:05:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.179 16:05:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:16.179 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:16.179 16:05:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.179 16:05:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:16.179 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:16.179 16:05:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:16.179 16:05:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:16.179 16:05:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:09:16.179 16:05:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.179 16:05:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.179 16:05:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:16.179 16:05:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.179 16:05:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:16.179 Found net devices under 0000:09:00.0: cvl_0_0 00:09:16.180 16:05:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.180 16:05:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.180 16:05:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.180 16:05:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:16.180 16:05:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.180 16:05:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:16.180 Found net devices under 0000:09:00.1: cvl_0_1 00:09:16.180 16:05:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.180 16:05:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:16.180 16:05:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:16.180 16:05:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:16.180 16:05:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:16.180 16:05:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:16.180 16:05:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.180 16:05:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.180 16:05:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.180 16:05:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:16.180 16:05:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.180 16:05:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.180 16:05:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:16.180 16:05:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.180 16:05:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.180 16:05:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:16.180 16:05:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:16.180 16:05:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.180 16:05:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.180 16:05:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.180 16:05:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.180 16:05:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:16.438 16:05:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.438 16:05:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.438 16:05:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.438 16:05:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:16.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:09:16.438 00:09:16.438 --- 10.0.0.2 ping statistics --- 00:09:16.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.438 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:16.438 16:05:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:09:16.438 00:09:16.438 --- 10.0.0.1 ping statistics --- 00:09:16.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.438 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:16.438 16:05:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.438 16:05:17 -- nvmf/common.sh@411 -- # return 0 00:09:16.438 16:05:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:16.438 16:05:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.438 16:05:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:16.438 16:05:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:16.438 16:05:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.438 16:05:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:16.438 16:05:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:16.438 16:05:17 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:16.438 16:05:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:16.438 16:05:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:16.438 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.438 16:05:17 -- nvmf/common.sh@470 -- # nvmfpid=3338649 00:09:16.438 16:05:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:16.438 16:05:17 -- nvmf/common.sh@471 -- # waitforlisten 3338649 00:09:16.438 16:05:17 -- common/autotest_common.sh@817 -- # '[' -z 3338649 ']' 00:09:16.438 16:05:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.438 16:05:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:16.438 16:05:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.438 16:05:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:16.438 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.438 [2024-04-24 16:05:17.591270] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:09:16.438 [2024-04-24 16:05:17.591366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.438 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.438 [2024-04-24 16:05:17.654947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.696 [2024-04-24 16:05:17.760148] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.696 [2024-04-24 16:05:17.760201] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:16.696 [2024-04-24 16:05:17.760232] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.696 [2024-04-24 16:05:17.760244] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.696 [2024-04-24 16:05:17.760254] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.696 [2024-04-24 16:05:17.760300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.696 16:05:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:16.696 16:05:17 -- common/autotest_common.sh@850 -- # return 0 00:09:16.696 16:05:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:16.696 16:05:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:16.696 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 16:05:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.696 16:05:17 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.696 16:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.696 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 [2024-04-24 16:05:17.903547] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.696 16:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.696 16:05:17 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:16.696 16:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.696 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 16:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.696 16:05:17 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.696 16:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.696 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 [2024-04-24 16:05:17.919770] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.696 16:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.696 16:05:17 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:16.696 16:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.696 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 NULL1 00:09:16.696 16:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.696 16:05:17 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:16.696 16:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.696 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 16:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.696 16:05:17 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:16.696 16:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.696 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 16:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.696 16:05:17 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:16.697 [2024-04-24 16:05:17.964230] Starting SPDK v24.05-pre 
git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:09:16.697 [2024-04-24 16:05:17.964271] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338785 ] 00:09:16.954 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.212 Attached to nqn.2016-06.io.spdk:cnode1 00:09:17.212 Namespace ID: 1 size: 1GB 00:09:17.212 fused_ordering(0) 00:09:17.212 fused_ordering(1) 00:09:17.212 fused_ordering(2) 00:09:17.212 fused_ordering(3) 00:09:17.212 fused_ordering(4) 00:09:17.212 fused_ordering(5) 00:09:17.212 fused_ordering(6) 00:09:17.212 fused_ordering(7) 00:09:17.212 fused_ordering(8) 00:09:17.212 fused_ordering(9) 00:09:17.212 fused_ordering(10) 00:09:17.212 fused_ordering(11) 00:09:17.212 fused_ordering(12) 00:09:17.212 fused_ordering(13) 00:09:17.212 fused_ordering(14) 00:09:17.212 fused_ordering(15) 00:09:17.212 fused_ordering(16) 00:09:17.212 fused_ordering(17) 00:09:17.212 fused_ordering(18) 00:09:17.212 fused_ordering(19) 00:09:17.212 fused_ordering(20) 00:09:17.212 fused_ordering(21) 00:09:17.212 fused_ordering(22) 00:09:17.212 fused_ordering(23) 00:09:17.212 fused_ordering(24) 00:09:17.212 fused_ordering(25) 00:09:17.212 fused_ordering(26) 00:09:17.212 fused_ordering(27) 00:09:17.212 fused_ordering(28) 00:09:17.212 fused_ordering(29) 00:09:17.212 fused_ordering(30) 00:09:17.212 fused_ordering(31) 00:09:17.212 fused_ordering(32) 00:09:17.212 fused_ordering(33) 00:09:17.212 fused_ordering(34) 00:09:17.212 fused_ordering(35) 00:09:17.212 fused_ordering(36) 00:09:17.212 fused_ordering(37) 00:09:17.212 fused_ordering(38) 00:09:17.212 fused_ordering(39) 00:09:17.212 fused_ordering(40) 00:09:17.212 fused_ordering(41) 00:09:17.212 fused_ordering(42) 00:09:17.212 fused_ordering(43) 00:09:17.212 fused_ordering(44) 00:09:17.212 fused_ordering(45) 00:09:17.212 fused_ordering(46) 00:09:17.212 fused_ordering(47) 00:09:17.212 fused_ordering(48) 00:09:17.212 fused_ordering(49) 00:09:17.212 fused_ordering(50) 00:09:17.212 fused_ordering(51) 00:09:17.212 fused_ordering(52) 00:09:17.212 fused_ordering(53) 00:09:17.212 fused_ordering(54) 00:09:17.212 fused_ordering(55) 00:09:17.212 fused_ordering(56) 00:09:17.212 fused_ordering(57) 00:09:17.212 fused_ordering(58) 00:09:17.212 fused_ordering(59) 00:09:17.212 fused_ordering(60) 00:09:17.212 fused_ordering(61) 00:09:17.212 fused_ordering(62) 00:09:17.212 fused_ordering(63) 00:09:17.212 fused_ordering(64) 00:09:17.212 fused_ordering(65) 00:09:17.212 fused_ordering(66) 00:09:17.212 fused_ordering(67) 00:09:17.212 fused_ordering(68) 00:09:17.212 fused_ordering(69) 00:09:17.212 fused_ordering(70) 00:09:17.212 fused_ordering(71) 00:09:17.212 fused_ordering(72) 00:09:17.212 fused_ordering(73) 00:09:17.212 fused_ordering(74) 00:09:17.212 fused_ordering(75) 00:09:17.212 fused_ordering(76) 00:09:17.212 fused_ordering(77) 00:09:17.212 fused_ordering(78) 00:09:17.212 fused_ordering(79) 00:09:17.212 fused_ordering(80) 00:09:17.212 fused_ordering(81) 00:09:17.212 fused_ordering(82) 00:09:17.212 fused_ordering(83) 00:09:17.212 fused_ordering(84) 00:09:17.212 fused_ordering(85) 00:09:17.212 fused_ordering(86) 00:09:17.212 fused_ordering(87) 00:09:17.212 fused_ordering(88) 00:09:17.212 fused_ordering(89) 00:09:17.212 fused_ordering(90) 00:09:17.212 fused_ordering(91) 00:09:17.212 fused_ordering(92) 00:09:17.212 fused_ordering(93) 00:09:17.212 fused_ordering(94) 00:09:17.212 fused_ordering(95) 
00:09:17.212 fused_ordering(96) 00:09:17.212 fused_ordering(97) 00:09:17.212 fused_ordering(98) 00:09:17.212 fused_ordering(99) 00:09:17.212 fused_ordering(100) 00:09:17.212 fused_ordering(101) 00:09:17.212 fused_ordering(102) 00:09:17.212 fused_ordering(103) 00:09:17.212 fused_ordering(104) 00:09:17.212 fused_ordering(105) 00:09:17.213 fused_ordering(106) 00:09:17.213 fused_ordering(107) 00:09:17.213 fused_ordering(108) 00:09:17.213 fused_ordering(109) 00:09:17.213 fused_ordering(110) 00:09:17.213 fused_ordering(111) 00:09:17.213 fused_ordering(112) 00:09:17.213 fused_ordering(113) 00:09:17.213 fused_ordering(114) 00:09:17.213 fused_ordering(115) 00:09:17.213 fused_ordering(116) 00:09:17.213 fused_ordering(117) 00:09:17.213 fused_ordering(118) 00:09:17.213 fused_ordering(119) 00:09:17.213 fused_ordering(120) 00:09:17.213 fused_ordering(121) 00:09:17.213 fused_ordering(122) 00:09:17.213 fused_ordering(123) 00:09:17.213 fused_ordering(124) 00:09:17.213 fused_ordering(125) 00:09:17.213 fused_ordering(126) 00:09:17.213 fused_ordering(127) 00:09:17.213 fused_ordering(128) 00:09:17.213 fused_ordering(129) 00:09:17.213 fused_ordering(130) 00:09:17.213 fused_ordering(131) 00:09:17.213 fused_ordering(132) 00:09:17.213 fused_ordering(133) 00:09:17.213 fused_ordering(134) 00:09:17.213 fused_ordering(135) 00:09:17.213 fused_ordering(136) 00:09:17.213 fused_ordering(137) 00:09:17.213 fused_ordering(138) 00:09:17.213 fused_ordering(139) 00:09:17.213 fused_ordering(140) 00:09:17.213 fused_ordering(141) 00:09:17.213 fused_ordering(142) 00:09:17.213 fused_ordering(143) 00:09:17.213 fused_ordering(144) 00:09:17.213 fused_ordering(145) 00:09:17.213 fused_ordering(146) 00:09:17.213 fused_ordering(147) 00:09:17.213 fused_ordering(148) 00:09:17.213 fused_ordering(149) 00:09:17.213 fused_ordering(150) 00:09:17.213 fused_ordering(151) 00:09:17.213 fused_ordering(152) 00:09:17.213 fused_ordering(153) 00:09:17.213 fused_ordering(154) 00:09:17.213 fused_ordering(155) 00:09:17.213 fused_ordering(156) 00:09:17.213 fused_ordering(157) 00:09:17.213 fused_ordering(158) 00:09:17.213 fused_ordering(159) 00:09:17.213 fused_ordering(160) 00:09:17.213 fused_ordering(161) 00:09:17.213 fused_ordering(162) 00:09:17.213 fused_ordering(163) 00:09:17.213 fused_ordering(164) 00:09:17.213 fused_ordering(165) 00:09:17.213 fused_ordering(166) 00:09:17.213 fused_ordering(167) 00:09:17.213 fused_ordering(168) 00:09:17.213 fused_ordering(169) 00:09:17.213 fused_ordering(170) 00:09:17.213 fused_ordering(171) 00:09:17.213 fused_ordering(172) 00:09:17.213 fused_ordering(173) 00:09:17.213 fused_ordering(174) 00:09:17.213 fused_ordering(175) 00:09:17.213 fused_ordering(176) 00:09:17.213 fused_ordering(177) 00:09:17.213 fused_ordering(178) 00:09:17.213 fused_ordering(179) 00:09:17.213 fused_ordering(180) 00:09:17.213 fused_ordering(181) 00:09:17.213 fused_ordering(182) 00:09:17.213 fused_ordering(183) 00:09:17.213 fused_ordering(184) 00:09:17.213 fused_ordering(185) 00:09:17.213 fused_ordering(186) 00:09:17.213 fused_ordering(187) 00:09:17.213 fused_ordering(188) 00:09:17.213 fused_ordering(189) 00:09:17.213 fused_ordering(190) 00:09:17.213 fused_ordering(191) 00:09:17.213 fused_ordering(192) 00:09:17.213 fused_ordering(193) 00:09:17.213 fused_ordering(194) 00:09:17.213 fused_ordering(195) 00:09:17.213 fused_ordering(196) 00:09:17.213 fused_ordering(197) 00:09:17.213 fused_ordering(198) 00:09:17.213 fused_ordering(199) 00:09:17.213 fused_ordering(200) 00:09:17.213 fused_ordering(201) 00:09:17.213 fused_ordering(202) 00:09:17.213 
fused_ordering(203) 00:09:17.213 fused_ordering(204) 00:09:17.213 fused_ordering(205) 00:09:17.779 fused_ordering(206) 00:09:17.779 fused_ordering(207) 00:09:17.779 fused_ordering(208) 00:09:17.779 fused_ordering(209) 00:09:17.779 fused_ordering(210) 00:09:17.779 fused_ordering(211) 00:09:17.779 fused_ordering(212) 00:09:17.779 fused_ordering(213) 00:09:17.779 fused_ordering(214) 00:09:17.779 fused_ordering(215) 00:09:17.779 fused_ordering(216) 00:09:17.779 fused_ordering(217) 00:09:17.779 fused_ordering(218) 00:09:17.779 fused_ordering(219) 00:09:17.779 fused_ordering(220) 00:09:17.779 fused_ordering(221) 00:09:17.779 fused_ordering(222) 00:09:17.779 fused_ordering(223) 00:09:17.779 fused_ordering(224) 00:09:17.779 fused_ordering(225) 00:09:17.779 fused_ordering(226) 00:09:17.779 fused_ordering(227) 00:09:17.779 fused_ordering(228) 00:09:17.779 fused_ordering(229) 00:09:17.779 fused_ordering(230) 00:09:17.779 fused_ordering(231) 00:09:17.779 fused_ordering(232) 00:09:17.779 fused_ordering(233) 00:09:17.779 fused_ordering(234) 00:09:17.779 fused_ordering(235) 00:09:17.779 fused_ordering(236) 00:09:17.779 fused_ordering(237) 00:09:17.779 fused_ordering(238) 00:09:17.779 fused_ordering(239) 00:09:17.779 fused_ordering(240) 00:09:17.779 fused_ordering(241) 00:09:17.779 fused_ordering(242) 00:09:17.779 fused_ordering(243) 00:09:17.779 fused_ordering(244) 00:09:17.779 fused_ordering(245) 00:09:17.779 fused_ordering(246) 00:09:17.779 fused_ordering(247) 00:09:17.779 fused_ordering(248) 00:09:17.779 fused_ordering(249) 00:09:17.779 fused_ordering(250) 00:09:17.779 fused_ordering(251) 00:09:17.779 fused_ordering(252) 00:09:17.779 fused_ordering(253) 00:09:17.779 fused_ordering(254) 00:09:17.779 fused_ordering(255) 00:09:17.779 fused_ordering(256) 00:09:17.779 fused_ordering(257) 00:09:17.779 fused_ordering(258) 00:09:17.779 fused_ordering(259) 00:09:17.779 fused_ordering(260) 00:09:17.779 fused_ordering(261) 00:09:17.779 fused_ordering(262) 00:09:17.779 fused_ordering(263) 00:09:17.779 fused_ordering(264) 00:09:17.779 fused_ordering(265) 00:09:17.779 fused_ordering(266) 00:09:17.779 fused_ordering(267) 00:09:17.779 fused_ordering(268) 00:09:17.779 fused_ordering(269) 00:09:17.779 fused_ordering(270) 00:09:17.779 fused_ordering(271) 00:09:17.779 fused_ordering(272) 00:09:17.779 fused_ordering(273) 00:09:17.779 fused_ordering(274) 00:09:17.779 fused_ordering(275) 00:09:17.779 fused_ordering(276) 00:09:17.779 fused_ordering(277) 00:09:17.779 fused_ordering(278) 00:09:17.779 fused_ordering(279) 00:09:17.779 fused_ordering(280) 00:09:17.779 fused_ordering(281) 00:09:17.779 fused_ordering(282) 00:09:17.779 fused_ordering(283) 00:09:17.779 fused_ordering(284) 00:09:17.779 fused_ordering(285) 00:09:17.779 fused_ordering(286) 00:09:17.779 fused_ordering(287) 00:09:17.779 fused_ordering(288) 00:09:17.779 fused_ordering(289) 00:09:17.779 fused_ordering(290) 00:09:17.779 fused_ordering(291) 00:09:17.779 fused_ordering(292) 00:09:17.779 fused_ordering(293) 00:09:17.779 fused_ordering(294) 00:09:17.779 fused_ordering(295) 00:09:17.779 fused_ordering(296) 00:09:17.779 fused_ordering(297) 00:09:17.779 fused_ordering(298) 00:09:17.779 fused_ordering(299) 00:09:17.779 fused_ordering(300) 00:09:17.779 fused_ordering(301) 00:09:17.779 fused_ordering(302) 00:09:17.779 fused_ordering(303) 00:09:17.779 fused_ordering(304) 00:09:17.779 fused_ordering(305) 00:09:17.779 fused_ordering(306) 00:09:17.779 fused_ordering(307) 00:09:17.779 fused_ordering(308) 00:09:17.779 fused_ordering(309) 00:09:17.779 fused_ordering(310) 
00:09:17.779 fused_ordering(311) 00:09:17.779 fused_ordering(312) 00:09:17.779 fused_ordering(313) 00:09:17.779 fused_ordering(314) 00:09:17.779 fused_ordering(315) 00:09:17.779 fused_ordering(316) 00:09:17.779 fused_ordering(317) 00:09:17.779 fused_ordering(318) 00:09:17.779 fused_ordering(319) 00:09:17.779 fused_ordering(320) 00:09:17.779 fused_ordering(321) 00:09:17.779 fused_ordering(322) 00:09:17.779 fused_ordering(323) 00:09:17.779 fused_ordering(324) 00:09:17.779 fused_ordering(325) 00:09:17.779 fused_ordering(326) 00:09:17.779 fused_ordering(327) 00:09:17.779 fused_ordering(328) 00:09:17.779 fused_ordering(329) 00:09:17.779 fused_ordering(330) 00:09:17.779 fused_ordering(331) 00:09:17.779 fused_ordering(332) 00:09:17.779 fused_ordering(333) 00:09:17.779 fused_ordering(334) 00:09:17.779 fused_ordering(335) 00:09:17.779 fused_ordering(336) 00:09:17.779 fused_ordering(337) 00:09:17.779 fused_ordering(338) 00:09:17.779 fused_ordering(339) 00:09:17.779 fused_ordering(340) 00:09:17.779 fused_ordering(341) 00:09:17.779 fused_ordering(342) 00:09:17.779 fused_ordering(343) 00:09:17.779 fused_ordering(344) 00:09:17.779 fused_ordering(345) 00:09:17.779 fused_ordering(346) 00:09:17.779 fused_ordering(347) 00:09:17.779 fused_ordering(348) 00:09:17.779 fused_ordering(349) 00:09:17.779 fused_ordering(350) 00:09:17.779 fused_ordering(351) 00:09:17.779 fused_ordering(352) 00:09:17.779 fused_ordering(353) 00:09:17.779 fused_ordering(354) 00:09:17.779 fused_ordering(355) 00:09:17.779 fused_ordering(356) 00:09:17.779 fused_ordering(357) 00:09:17.779 fused_ordering(358) 00:09:17.779 fused_ordering(359) 00:09:17.779 fused_ordering(360) 00:09:17.779 fused_ordering(361) 00:09:17.779 fused_ordering(362) 00:09:17.779 fused_ordering(363) 00:09:17.779 fused_ordering(364) 00:09:17.779 fused_ordering(365) 00:09:17.779 fused_ordering(366) 00:09:17.779 fused_ordering(367) 00:09:17.779 fused_ordering(368) 00:09:17.779 fused_ordering(369) 00:09:17.779 fused_ordering(370) 00:09:17.779 fused_ordering(371) 00:09:17.779 fused_ordering(372) 00:09:17.779 fused_ordering(373) 00:09:17.779 fused_ordering(374) 00:09:17.779 fused_ordering(375) 00:09:17.779 fused_ordering(376) 00:09:17.779 fused_ordering(377) 00:09:17.779 fused_ordering(378) 00:09:17.779 fused_ordering(379) 00:09:17.779 fused_ordering(380) 00:09:17.779 fused_ordering(381) 00:09:17.779 fused_ordering(382) 00:09:17.779 fused_ordering(383) 00:09:17.779 fused_ordering(384) 00:09:17.779 fused_ordering(385) 00:09:17.779 fused_ordering(386) 00:09:17.779 fused_ordering(387) 00:09:17.779 fused_ordering(388) 00:09:17.779 fused_ordering(389) 00:09:17.779 fused_ordering(390) 00:09:17.779 fused_ordering(391) 00:09:17.779 fused_ordering(392) 00:09:17.779 fused_ordering(393) 00:09:17.779 fused_ordering(394) 00:09:17.779 fused_ordering(395) 00:09:17.779 fused_ordering(396) 00:09:17.779 fused_ordering(397) 00:09:17.779 fused_ordering(398) 00:09:17.779 fused_ordering(399) 00:09:17.779 fused_ordering(400) 00:09:17.779 fused_ordering(401) 00:09:17.779 fused_ordering(402) 00:09:17.779 fused_ordering(403) 00:09:17.779 fused_ordering(404) 00:09:17.779 fused_ordering(405) 00:09:17.779 fused_ordering(406) 00:09:17.779 fused_ordering(407) 00:09:17.779 fused_ordering(408) 00:09:17.779 fused_ordering(409) 00:09:17.780 fused_ordering(410) 00:09:18.345 fused_ordering(411) 00:09:18.345 fused_ordering(412) 00:09:18.345 fused_ordering(413) 00:09:18.345 fused_ordering(414) 00:09:18.345 fused_ordering(415) 00:09:18.345 fused_ordering(416) 00:09:18.345 fused_ordering(417) 00:09:18.345 
fused_ordering(418) 00:09:18.345 fused_ordering(419) 00:09:18.345 fused_ordering(420) 00:09:18.345 fused_ordering(421) 00:09:18.345 fused_ordering(422) 00:09:18.345 fused_ordering(423) 00:09:18.345 fused_ordering(424) 00:09:18.345 fused_ordering(425) 00:09:18.345 fused_ordering(426) 00:09:18.345 fused_ordering(427) 00:09:18.345 fused_ordering(428) 00:09:18.345 fused_ordering(429) 00:09:18.345 fused_ordering(430) 00:09:18.345 fused_ordering(431) 00:09:18.345 fused_ordering(432) 00:09:18.345 fused_ordering(433) 00:09:18.345 fused_ordering(434) 00:09:18.345 fused_ordering(435) 00:09:18.345 fused_ordering(436) 00:09:18.345 fused_ordering(437) 00:09:18.345 fused_ordering(438) 00:09:18.345 fused_ordering(439) 00:09:18.345 fused_ordering(440) 00:09:18.345 fused_ordering(441) 00:09:18.345 fused_ordering(442) 00:09:18.346 fused_ordering(443) 00:09:18.346 fused_ordering(444) 00:09:18.346 fused_ordering(445) 00:09:18.346 fused_ordering(446) 00:09:18.346 fused_ordering(447) 00:09:18.346 fused_ordering(448) 00:09:18.346 fused_ordering(449) 00:09:18.346 fused_ordering(450) 00:09:18.346 fused_ordering(451) 00:09:18.346 fused_ordering(452) 00:09:18.346 fused_ordering(453) 00:09:18.346 fused_ordering(454) 00:09:18.346 fused_ordering(455) 00:09:18.346 fused_ordering(456) 00:09:18.346 fused_ordering(457) 00:09:18.346 fused_ordering(458) 00:09:18.346 fused_ordering(459) 00:09:18.346 fused_ordering(460) 00:09:18.346 fused_ordering(461) 00:09:18.346 fused_ordering(462) 00:09:18.346 fused_ordering(463) 00:09:18.346 fused_ordering(464) 00:09:18.346 fused_ordering(465) 00:09:18.346 fused_ordering(466) 00:09:18.346 fused_ordering(467) 00:09:18.346 fused_ordering(468) 00:09:18.346 fused_ordering(469) 00:09:18.346 fused_ordering(470) 00:09:18.346 fused_ordering(471) 00:09:18.346 fused_ordering(472) 00:09:18.346 fused_ordering(473) 00:09:18.346 fused_ordering(474) 00:09:18.346 fused_ordering(475) 00:09:18.346 fused_ordering(476) 00:09:18.346 fused_ordering(477) 00:09:18.346 fused_ordering(478) 00:09:18.346 fused_ordering(479) 00:09:18.346 fused_ordering(480) 00:09:18.346 fused_ordering(481) 00:09:18.346 fused_ordering(482) 00:09:18.346 fused_ordering(483) 00:09:18.346 fused_ordering(484) 00:09:18.346 fused_ordering(485) 00:09:18.346 fused_ordering(486) 00:09:18.346 fused_ordering(487) 00:09:18.346 fused_ordering(488) 00:09:18.346 fused_ordering(489) 00:09:18.346 fused_ordering(490) 00:09:18.346 fused_ordering(491) 00:09:18.346 fused_ordering(492) 00:09:18.346 fused_ordering(493) 00:09:18.346 fused_ordering(494) 00:09:18.346 fused_ordering(495) 00:09:18.346 fused_ordering(496) 00:09:18.346 fused_ordering(497) 00:09:18.346 fused_ordering(498) 00:09:18.346 fused_ordering(499) 00:09:18.346 fused_ordering(500) 00:09:18.346 fused_ordering(501) 00:09:18.346 fused_ordering(502) 00:09:18.346 fused_ordering(503) 00:09:18.346 fused_ordering(504) 00:09:18.346 fused_ordering(505) 00:09:18.346 fused_ordering(506) 00:09:18.346 fused_ordering(507) 00:09:18.346 fused_ordering(508) 00:09:18.346 fused_ordering(509) 00:09:18.346 fused_ordering(510) 00:09:18.346 fused_ordering(511) 00:09:18.346 fused_ordering(512) 00:09:18.346 fused_ordering(513) 00:09:18.346 fused_ordering(514) 00:09:18.346 fused_ordering(515) 00:09:18.346 fused_ordering(516) 00:09:18.346 fused_ordering(517) 00:09:18.346 fused_ordering(518) 00:09:18.346 fused_ordering(519) 00:09:18.346 fused_ordering(520) 00:09:18.346 fused_ordering(521) 00:09:18.346 fused_ordering(522) 00:09:18.346 fused_ordering(523) 00:09:18.346 fused_ordering(524) 00:09:18.346 fused_ordering(525) 
00:09:18.346 fused_ordering(526) 00:09:18.346 fused_ordering(527) [fused_ordering trace condensed: one line per request id from 528 through 1022, emitted between 00:09:18.346 and 00:09:19.846, none reporting an error] 00:09:19.846 fused_ordering(1023) 00:09:19.846 16:05:20 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:19.846 16:05:20 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:19.846 16:05:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:19.846 16:05:20 -- nvmf/common.sh@117 -- # sync 00:09:19.846 16:05:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.846 16:05:20 -- nvmf/common.sh@120 -- # set +e 00:09:19.846 16:05:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.846 16:05:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.846 rmmod nvme_tcp 00:09:19.846 rmmod nvme_fabrics 00:09:19.846 rmmod nvme_keyring 00:09:19.846 16:05:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.846 16:05:21 -- nvmf/common.sh@124 -- # set -e 00:09:19.846 16:05:21 -- nvmf/common.sh@125 -- # return 0 00:09:19.846 16:05:21 -- nvmf/common.sh@478 -- # '[' -n 3338649 ']' 00:09:19.846 16:05:21 -- nvmf/common.sh@479 -- # killprocess 3338649 00:09:19.846 16:05:21 -- common/autotest_common.sh@936 -- # '[' -z 3338649 ']' 00:09:19.846 16:05:21 -- common/autotest_common.sh@940 -- # kill -0 3338649 00:09:19.846 16:05:21 -- common/autotest_common.sh@941 -- # uname 00:09:19.846 16:05:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:19.846 16:05:21 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3338649 00:09:19.846 16:05:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:19.846 16:05:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:19.846 16:05:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3338649' 00:09:19.846 killing process with pid 3338649 00:09:19.846 16:05:21 -- common/autotest_common.sh@955 -- # kill 3338649 00:09:19.846 16:05:21 -- common/autotest_common.sh@960 -- # wait 3338649 00:09:20.104 16:05:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:20.104 16:05:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:20.104 16:05:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:20.104 16:05:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.104 16:05:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.104 16:05:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.104 16:05:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.104 16:05:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.639 16:05:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.639 00:09:22.639 real 0m8.012s 00:09:22.639 user 0m5.459s 00:09:22.639 sys 0m3.804s 00:09:22.639 16:05:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.639 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:09:22.639 ************************************ 00:09:22.639 END TEST nvmf_fused_ordering 00:09:22.639 ************************************ 00:09:22.639 16:05:23 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:22.639 16:05:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:22.639 16:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.639 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:09:22.639 ************************************ 00:09:22.639 START TEST nvmf_delete_subsystem 00:09:22.639 ************************************ 00:09:22.639 16:05:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:22.639 * Looking for test storage... 
00:09:22.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.639 16:05:23 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.639 16:05:23 -- nvmf/common.sh@7 -- # uname -s 00:09:22.639 16:05:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.639 16:05:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.639 16:05:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.639 16:05:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.639 16:05:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.639 16:05:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.639 16:05:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.639 16:05:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.639 16:05:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.639 16:05:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.639 16:05:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:22.639 16:05:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:22.639 16:05:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.639 16:05:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.639 16:05:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.639 16:05:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.639 16:05:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.639 16:05:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.639 16:05:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.639 16:05:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.639 16:05:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicate toolchain entries condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.639 16:05:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[duplicate toolchain entries condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.639 16:05:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[duplicate toolchain entries condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.639 16:05:23 -- paths/export.sh@5 -- # export PATH 00:09:22.639 16:05:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[duplicate toolchain entries condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.639 16:05:23 -- nvmf/common.sh@47 -- # : 0 00:09:22.639 16:05:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.639 16:05:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.639 16:05:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.639 16:05:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.639 16:05:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.639 16:05:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.639 16:05:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.639 16:05:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.639 16:05:23 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:22.639 16:05:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:22.639 16:05:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.639 16:05:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:22.639 16:05:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:22.639 16:05:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:22.639 16:05:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.639 16:05:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.639 16:05:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.639 16:05:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:22.639 16:05:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:22.639 16:05:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.639 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:09:24.539 16:05:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:24.539 16:05:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.539 16:05:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.539 16:05:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.539 16:05:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.539 16:05:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.539 16:05:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.539 16:05:25 -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.539 16:05:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.539 16:05:25 -- nvmf/common.sh@296 -- # e810=() 00:09:24.539 16:05:25 -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.539 16:05:25 -- nvmf/common.sh@297 -- # x722=()
00:09:24.539 16:05:25 -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.539 16:05:25 -- nvmf/common.sh@298 -- # mlx=() 00:09:24.539 16:05:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.539 16:05:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.539 16:05:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.539 16:05:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.539 16:05:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.539 16:05:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.539 16:05:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.539 16:05:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.539 16:05:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.540 16:05:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.540 16:05:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.540 16:05:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.540 16:05:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.540 16:05:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.540 16:05:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.540 16:05:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.540 16:05:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:24.540 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:24.540 16:05:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.540 16:05:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:24.540 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:24.540 16:05:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.540 16:05:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.540 16:05:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.540 16:05:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:24.540 16:05:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.540 16:05:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:24.540 Found net devices under 0000:09:00.0: cvl_0_0 00:09:24.540 16:05:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
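The whitelist walk above resolves each matching PCI function to its kernel net device through sysfs. A minimal sketch of that lookup, assuming only the sysfs layout (the array manipulation mirrors the pci_net_devs trace; the explicit two-address loop is an illustration, not the harness's real iteration):

    # Resolve each whitelisted E810 PCI function to its netdev name via sysfs.
    for pci in 0000:09:00.0 0000:09:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done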
00:09:24.540 16:05:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.540 16:05:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.540 16:05:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:24.540 16:05:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.540 16:05:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:24.540 Found net devices under 0000:09:00.1: cvl_0_1 00:09:24.540 16:05:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.540 16:05:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:24.540 16:05:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:24.540 16:05:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:24.540 16:05:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.540 16:05:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.540 16:05:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.540 16:05:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.540 16:05:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.540 16:05:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.540 16:05:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.540 16:05:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.540 16:05:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.540 16:05:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.540 16:05:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.540 16:05:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.540 16:05:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.540 16:05:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.540 16:05:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.540 16:05:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.540 16:05:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.540 16:05:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.540 16:05:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.540 16:05:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:09:24.540 00:09:24.540 --- 10.0.0.2 ping statistics --- 00:09:24.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.540 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:09:24.540 16:05:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:09:24.540 00:09:24.540 --- 10.0.0.1 ping statistics --- 00:09:24.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.540 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:09:24.540 16:05:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.540 16:05:25 -- nvmf/common.sh@411 -- # return 0 00:09:24.540 16:05:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:24.540 16:05:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.540 16:05:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:24.540 16:05:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.540 16:05:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:24.540 16:05:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:24.540 16:05:25 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:24.540 16:05:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:24.540 16:05:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:24.540 16:05:25 -- common/autotest_common.sh@10 -- # set +x 00:09:24.540 16:05:25 -- nvmf/common.sh@470 -- # nvmfpid=3341093 00:09:24.540 16:05:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:24.540 16:05:25 -- nvmf/common.sh@471 -- # waitforlisten 3341093 00:09:24.540 16:05:25 -- common/autotest_common.sh@817 -- # '[' -z 3341093 ']' 00:09:24.540 16:05:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.540 16:05:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:24.540 16:05:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.540 16:05:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:24.540 16:05:25 -- common/autotest_common.sh@10 -- # set +x 00:09:24.540 [2024-04-24 16:05:25.824817] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:09:24.540 [2024-04-24 16:05:25.824918] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.798 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.798 [2024-04-24 16:05:25.889109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.798 [2024-04-24 16:05:26.000254] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.798 [2024-04-24 16:05:26.000305] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.798 [2024-04-24 16:05:26.000336] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.798 [2024-04-24 16:05:26.000349] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.798 [2024-04-24 16:05:26.000361] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
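With NET_TYPE=phy the two E810 ports (0000:09:00.0 and .1, presumably cabled to each other on this rig) are split across a network namespace so that initiator and target traffic crosses a real link rather than loopback. Condensing the nvmf_tcp_init trace above into one runnable sequence, with the xtrace noise removed:

    ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Both pings answering in roughly 0.13 ms confirms the path before nvmf_tgt, launched above under ip netns exec cvl_0_0_ns_spdk, is asked to listen on 10.0.0.2.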
00:09:24.798 [2024-04-24 16:05:26.000441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.798 [2024-04-24 16:05:26.000446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.056 16:05:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:25.056 16:05:26 -- common/autotest_common.sh@850 -- # return 0 00:09:25.056 16:05:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:25.056 16:05:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:25.056 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:25.056 16:05:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.056 16:05:26 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.056 16:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.056 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:25.056 [2024-04-24 16:05:26.144559] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.056 16:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.056 16:05:26 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:25.056 16:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.056 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:25.056 16:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.056 16:05:26 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.056 16:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.057 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:25.057 [2024-04-24 16:05:26.160844] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.057 16:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.057 16:05:26 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:25.057 16:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.057 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:25.057 NULL1 00:09:25.057 16:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.057 16:05:26 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:25.057 16:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.057 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:25.057 Delay0 00:09:25.057 16:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.057 16:05:26 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.057 16:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.057 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:25.057 16:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.057 16:05:26 -- target/delete_subsystem.sh@28 -- # perf_pid=3341151 00:09:25.057 16:05:26 -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:25.057 16:05:26 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:25.057 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.057 [2024-04-24 16:05:26.245483] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:26.955 16:05:28 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.955 16:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.955 16:05:28 -- common/autotest_common.sh@10 -- # set +x [several hundred interleaved 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' submissions between 00:09:27.213 and 00:09:28.148 condensed] 00:09:27.214 [2024-04-24 16:05:28.288165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff460000c00 is same with the state(5) to be set 00:09:28.147 [2024-04-24 16:05:29.261157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193d120 is same with the state(5) to be set 00:09:28.148 [2024-04-24 16:05:29.290030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e880 is same with the state(5) to be set 00:09:28.148 [2024-04-24 16:05:29.291007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff46000bf90 is same with the state(5) to be set 00:09:28.148 [2024-04-24 16:05:29.291177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff46000c690 is same with the state(5) to be set 00:09:28.148 [2024-04-24 16:05:29.292000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191eba0 is same with the state(5) to be set 00:09:28.148 [2024-04-24 16:05:29.292437] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193d120 (9): Bad file descriptor 00:09:28.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:28.148 16:05:29 -- 
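This failure is the point of the test: at 16:05:28 the subsystem was deleted while spdk_nvme_perf still had 128 commands queued against Delay0, whose delay bdev adds roughly one second per I/O and so guarantees a full queue. The outstanding commands complete with sct=0, sc=8, which on a hedged reading of the NVMe generic status codes is "command aborted due to SQ deletion", and the initiator then tears its qpairs down. A hedged rpc.py replay of the same cycle; commands and arguments mirror the rpc_cmd trace, while the paths and the default RPC socket are assumptions about the environment:

    # Build the target side: transport -> subsystem -> listener -> slow namespace.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512       # 1000 MB null backing bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s added latency per I/O
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Load the namespace, then yank the subsystem out from under the I/O.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                              # let the queue fill
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

perf is expected to exit non-zero here; the target itself must survive, which is what the re-create sequence below goes on to verify.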
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:28.148 16:05:29 -- target/delete_subsystem.sh@34 -- # delay=0 00:09:28.148 16:05:29 -- target/delete_subsystem.sh@35 -- # kill -0 3341151 00:09:28.148 16:05:29 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:28.148 Initializing NVMe Controllers 00:09:28.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:28.148 Controller IO queue size 128, less than required. 00:09:28.148 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:28.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:28.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:28.148 Initialization complete. Launching workers. 00:09:28.148 ======================================================== 00:09:28.148 Latency(us) 00:09:28.148 Device Information : IOPS MiB/s Average min max 00:09:28.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.59 0.09 906943.75 527.28 1011477.16 00:09:28.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.75 0.08 904419.66 669.86 1012937.33 00:09:28.148 ======================================================== 00:09:28.148 Total : 352.34 0.17 905756.36 527.28 1012937.33 00:09:28.148 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@35 -- # kill -0 3341151 00:09:28.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3341151) - No such process 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@45 -- # NOT wait 3341151 00:09:28.714 16:05:29 -- common/autotest_common.sh@638 -- # local es=0 00:09:28.714 16:05:29 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 3341151 00:09:28.714 16:05:29 -- common/autotest_common.sh@626 -- # local arg=wait 00:09:28.714 16:05:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:28.714 16:05:29 -- common/autotest_common.sh@630 -- # type -t wait 00:09:28.714 16:05:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:28.714 16:05:29 -- common/autotest_common.sh@641 -- # wait 3341151 00:09:28.714 16:05:29 -- common/autotest_common.sh@641 -- # es=1 00:09:28.714 16:05:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:28.714 16:05:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:28.714 16:05:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:28.714 16:05:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:28.714 16:05:29 -- common/autotest_common.sh@10 -- # set +x 00:09:28.714 16:05:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.714 16:05:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:28.714 16:05:29 -- common/autotest_common.sh@10 -- # set +x 00:09:28.714 [2024-04-24 16:05:29.811089] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.714 16:05:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@50 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.714 16:05:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:28.714 16:05:29 -- common/autotest_common.sh@10 -- # set +x 00:09:28.714 16:05:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@54 -- # perf_pid=3341559 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@56 -- # delay=0 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@57 -- # kill -0 3341559 00:09:28.714 16:05:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:28.714 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.714 [2024-04-24 16:05:29.867594] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:29.280 16:05:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.280 16:05:30 -- target/delete_subsystem.sh@57 -- # kill -0 3341559 00:09:29.280 16:05:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.845 16:05:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.845 16:05:30 -- target/delete_subsystem.sh@57 -- # kill -0 3341559 00:09:29.845 16:05:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.102 16:05:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.102 16:05:31 -- target/delete_subsystem.sh@57 -- # kill -0 3341559 00:09:30.103 16:05:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.668 16:05:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.668 16:05:31 -- target/delete_subsystem.sh@57 -- # kill -0 3341559 00:09:30.668 16:05:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.233 16:05:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.233 16:05:32 -- target/delete_subsystem.sh@57 -- # kill -0 3341559 00:09:31.233 16:05:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.840 16:05:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.840 16:05:32 -- target/delete_subsystem.sh@57 -- # kill -0 3341559 00:09:31.840 16:05:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.840 Initializing NVMe Controllers 00:09:31.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:31.840 Controller IO queue size 128, less than required. 00:09:31.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:31.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:31.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:31.840 Initialization complete. Launching workers. 
00:09:31.840 ======================================================== 00:09:31.840 Latency(us) 00:09:31.840 Device Information : IOPS MiB/s Average min max 00:09:31.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003876.10 1000193.91 1012426.01 00:09:31.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005331.20 1000252.57 1041780.68 00:09:31.840 ======================================================== 00:09:31.840 Total : 256.00 0.12 1004603.65 1000193.91 1041780.68 00:09:31.840 00:09:32.098 16:05:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.098 16:05:33 -- target/delete_subsystem.sh@57 -- # kill -0 3341559 00:09:32.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3341559) - No such process 00:09:32.098 16:05:33 -- target/delete_subsystem.sh@67 -- # wait 3341559 00:09:32.098 16:05:33 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:32.098 16:05:33 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:32.099 16:05:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:32.099 16:05:33 -- nvmf/common.sh@117 -- # sync 00:09:32.099 16:05:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.099 16:05:33 -- nvmf/common.sh@120 -- # set +e 00:09:32.099 16:05:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.099 16:05:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.099 rmmod nvme_tcp 00:09:32.099 rmmod nvme_fabrics 00:09:32.099 rmmod nvme_keyring 00:09:32.357 16:05:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.357 16:05:33 -- nvmf/common.sh@124 -- # set -e 00:09:32.357 16:05:33 -- nvmf/common.sh@125 -- # return 0 00:09:32.357 16:05:33 -- nvmf/common.sh@478 -- # '[' -n 3341093 ']' 00:09:32.357 16:05:33 -- nvmf/common.sh@479 -- # killprocess 3341093 00:09:32.357 16:05:33 -- common/autotest_common.sh@936 -- # '[' -z 3341093 ']' 00:09:32.357 16:05:33 -- common/autotest_common.sh@940 -- # kill -0 3341093 00:09:32.357 16:05:33 -- common/autotest_common.sh@941 -- # uname 00:09:32.357 16:05:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:32.357 16:05:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3341093 00:09:32.357 16:05:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:32.357 16:05:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:32.357 16:05:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3341093' 00:09:32.357 killing process with pid 3341093 00:09:32.357 16:05:33 -- common/autotest_common.sh@955 -- # kill 3341093 00:09:32.357 16:05:33 -- common/autotest_common.sh@960 -- # wait 3341093 00:09:32.616 16:05:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:32.617 16:05:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:32.617 16:05:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:32.617 16:05:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.617 16:05:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.617 16:05:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.617 16:05:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.617 16:05:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.574 16:05:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:34.574 00:09:34.574 real 0m12.241s 00:09:34.574 user 0m27.535s 00:09:34.574 sys 0m2.921s 
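Both perf runs above exercise the same delete-while-busy pattern: spdk_nvme_perf is started against nqn.2016-06.io.spdk:cnode1 and the subsystem is torn down while up to 128 commands are still queued, so every outstanding request completes aborted (sc=8) and perf exits with errors while the harness polls for its death. A minimal bash sketch of that pattern, assuming a running nvmf_tgt and reusing the perf flags and polling loop visible in this log:

  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only probes liveness
      (( delay++ > 20 )) && exit 1            # bail out after ~10 s of polling
      sleep 0.5
  done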
00:09:34.574 16:05:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:34.574 16:05:35 -- common/autotest_common.sh@10 -- # set +x 00:09:34.574 ************************************ 00:09:34.574 END TEST nvmf_delete_subsystem 00:09:34.574 ************************************ 00:09:34.574 16:05:35 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:34.574 16:05:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:34.574 16:05:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.574 16:05:35 -- common/autotest_common.sh@10 -- # set +x 00:09:34.833 ************************************ 00:09:34.833 START TEST nvmf_ns_masking 00:09:34.833 ************************************ 00:09:34.833 16:05:35 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:34.833 * Looking for test storage... 00:09:34.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.833 16:05:35 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.833 16:05:35 -- nvmf/common.sh@7 -- # uname -s 00:09:34.833 16:05:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.833 16:05:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.833 16:05:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.833 16:05:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.833 16:05:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.833 16:05:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.833 16:05:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.833 16:05:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.833 16:05:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.833 16:05:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.833 16:05:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:34.833 16:05:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:34.833 16:05:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.833 16:05:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.833 16:05:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.833 16:05:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.833 16:05:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.833 16:05:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.833 16:05:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.833 16:05:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.833 16:05:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.833 16:05:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.833 16:05:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.833 16:05:35 -- paths/export.sh@5 -- # export PATH 00:09:34.833 16:05:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.833 16:05:35 -- nvmf/common.sh@47 -- # : 0 00:09:34.833 16:05:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.833 16:05:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.833 16:05:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.833 16:05:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.833 16:05:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.833 16:05:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.833 16:05:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.833 16:05:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.833 16:05:35 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.833 16:05:35 -- target/ns_masking.sh@11 -- # loops=5 00:09:34.833 16:05:35 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:34.833 16:05:35 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:34.833 16:05:35 -- target/ns_masking.sh@15 -- # uuidgen 00:09:34.833 16:05:35 -- target/ns_masking.sh@15 -- # HOSTID=e63a6e0a-1b3f-450b-aee5-042cb5c925c0 00:09:34.833 16:05:35 -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:34.833 16:05:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:34.833 16:05:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.833 16:05:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:34.833 16:05:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:34.833 16:05:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:34.833 16:05:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.833 16:05:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.833 16:05:35 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:09:34.833 16:05:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:34.833 16:05:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:34.833 16:05:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:34.833 16:05:35 -- common/autotest_common.sh@10 -- # set +x 00:09:36.738 16:05:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:36.738 16:05:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:36.738 16:05:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:36.738 16:05:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:36.738 16:05:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:36.738 16:05:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:36.738 16:05:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:36.738 16:05:37 -- nvmf/common.sh@295 -- # net_devs=() 00:09:36.738 16:05:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:36.738 16:05:37 -- nvmf/common.sh@296 -- # e810=() 00:09:36.738 16:05:37 -- nvmf/common.sh@296 -- # local -ga e810 00:09:36.738 16:05:37 -- nvmf/common.sh@297 -- # x722=() 00:09:36.738 16:05:37 -- nvmf/common.sh@297 -- # local -ga x722 00:09:36.738 16:05:37 -- nvmf/common.sh@298 -- # mlx=() 00:09:36.738 16:05:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:36.738 16:05:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.738 16:05:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:36.738 16:05:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:36.738 16:05:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:36.738 16:05:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.738 16:05:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:36.738 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:36.738 16:05:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.738 16:05:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:36.738 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:36.738 16:05:37 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:36.738 16:05:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.738 16:05:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.738 16:05:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:36.738 16:05:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.738 16:05:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:36.738 Found net devices under 0000:09:00.0: cvl_0_0 00:09:36.738 16:05:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.738 16:05:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.738 16:05:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.738 16:05:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:36.738 16:05:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.738 16:05:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:36.738 Found net devices under 0000:09:00.1: cvl_0_1 00:09:36.738 16:05:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.738 16:05:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:36.738 16:05:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:36.738 16:05:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:36.738 16:05:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:36.738 16:05:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.738 16:05:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.738 16:05:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.738 16:05:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:36.738 16:05:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.738 16:05:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.738 16:05:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:36.738 16:05:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.738 16:05:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.738 16:05:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:36.738 16:05:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:36.738 16:05:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.738 16:05:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.738 16:05:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.738 16:05:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.738 16:05:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:36.738 16:05:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.996 16:05:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.996 16:05:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.996 16:05:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:36.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:09:36.996 00:09:36.996 --- 10.0.0.2 ping statistics --- 00:09:36.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.996 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:36.996 16:05:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:09:36.996 00:09:36.996 --- 10.0.0.1 ping statistics --- 00:09:36.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.996 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:09:36.996 16:05:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.996 16:05:38 -- nvmf/common.sh@411 -- # return 0 00:09:36.996 16:05:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:36.996 16:05:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.996 16:05:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:36.996 16:05:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:36.996 16:05:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.996 16:05:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:36.996 16:05:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:36.996 16:05:38 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:36.996 16:05:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:36.996 16:05:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:36.996 16:05:38 -- common/autotest_common.sh@10 -- # set +x 00:09:36.996 16:05:38 -- nvmf/common.sh@470 -- # nvmfpid=3343911 00:09:36.996 16:05:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.996 16:05:38 -- nvmf/common.sh@471 -- # waitforlisten 3343911 00:09:36.996 16:05:38 -- common/autotest_common.sh@817 -- # '[' -z 3343911 ']' 00:09:36.996 16:05:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.996 16:05:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:36.996 16:05:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.996 16:05:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:36.996 16:05:38 -- common/autotest_common.sh@10 -- # set +x 00:09:36.996 [2024-04-24 16:05:38.130821] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:09:36.996 [2024-04-24 16:05:38.130899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.996 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.996 [2024-04-24 16:05:38.201401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.257 [2024-04-24 16:05:38.323311] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:37.257 [2024-04-24 16:05:38.323383] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.257 [2024-04-24 16:05:38.323400] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.257 [2024-04-24 16:05:38.323414] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.257 [2024-04-24 16:05:38.323425] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.257 [2024-04-24 16:05:38.323527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.257 [2024-04-24 16:05:38.323594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.257 [2024-04-24 16:05:38.323656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.257 [2024-04-24 16:05:38.323660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.823 16:05:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:37.823 16:05:39 -- common/autotest_common.sh@850 -- # return 0 00:09:37.823 16:05:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:37.823 16:05:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:37.823 16:05:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.823 16:05:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.823 16:05:39 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:38.082 [2024-04-24 16:05:39.296504] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.082 16:05:39 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:38.082 16:05:39 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:38.082 16:05:39 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:38.340 Malloc1 00:09:38.340 16:05:39 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:38.598 Malloc2 00:09:38.598 16:05:39 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:38.856 16:05:40 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:39.113 16:05:40 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.371 [2024-04-24 16:05:40.584262] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.371 16:05:40 -- target/ns_masking.sh@61 -- # connect 00:09:39.371 16:05:40 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e63a6e0a-1b3f-450b-aee5-042cb5c925c0 -a 10.0.0.2 -s 4420 -i 4 00:09:39.631 16:05:40 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:39.631 16:05:40 -- common/autotest_common.sh@1184 -- # local i=0 00:09:39.631 16:05:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.631 16:05:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
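The bring-up just traced is compact enough to restate. The following sketch collects the RPC and nvme-cli invocations as they appear in this log (rpc.py path shortened); the explicit host NQN and host UUID on the connect are what the later per-host masking calls key off:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MiB bdev, 512 B blocks
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I e63a6e0a-1b3f-450b-aee5-042cb5c925c0    # host UUID used by the masking checks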
00:09:39.631 16:05:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:41.535 16:05:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:41.535 16:05:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:41.535 16:05:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.535 16:05:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:41.535 16:05:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.535 16:05:42 -- common/autotest_common.sh@1194 -- # return 0 00:09:41.535 16:05:42 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:41.535 16:05:42 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:41.535 16:05:42 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:41.535 16:05:42 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:41.535 16:05:42 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:41.535 16:05:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:41.535 16:05:42 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:41.535 [ 0]:0x1 00:09:41.535 16:05:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:41.535 16:05:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:41.819 16:05:42 -- target/ns_masking.sh@40 -- # nguid=ae50e9fa5a2a4c99a0b93600ff6b387c 00:09:41.819 16:05:42 -- target/ns_masking.sh@41 -- # [[ ae50e9fa5a2a4c99a0b93600ff6b387c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.819 16:05:42 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:42.080 16:05:43 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:42.080 16:05:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:42.080 16:05:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:42.080 [ 0]:0x1 00:09:42.080 16:05:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:42.080 16:05:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:42.080 16:05:43 -- target/ns_masking.sh@40 -- # nguid=ae50e9fa5a2a4c99a0b93600ff6b387c 00:09:42.080 16:05:43 -- target/ns_masking.sh@41 -- # [[ ae50e9fa5a2a4c99a0b93600ff6b387c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:42.080 16:05:43 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:42.080 16:05:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:42.080 16:05:43 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:42.080 [ 1]:0x2 00:09:42.080 16:05:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:42.080 16:05:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:42.080 16:05:43 -- target/ns_masking.sh@40 -- # nguid=7cd358ecf9604504b04ba2efd64a31ef 00:09:42.080 16:05:43 -- target/ns_masking.sh@41 -- # [[ 7cd358ecf9604504b04ba2efd64a31ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:42.080 16:05:43 -- target/ns_masking.sh@69 -- # disconnect 00:09:42.080 16:05:43 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.080 16:05:43 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.338 16:05:43 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:42.596 16:05:43 -- target/ns_masking.sh@77 -- # connect 1 00:09:42.596 16:05:43 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e63a6e0a-1b3f-450b-aee5-042cb5c925c0 -a 10.0.0.2 -s 4420 -i 4 00:09:42.596 16:05:43 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:42.596 16:05:43 -- common/autotest_common.sh@1184 -- # local i=0 00:09:42.596 16:05:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.596 16:05:43 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:09:42.596 16:05:43 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:09:42.596 16:05:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:45.134 16:05:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:45.134 16:05:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:45.134 16:05:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:45.134 16:05:45 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:45.134 16:05:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:45.134 16:05:45 -- common/autotest_common.sh@1194 -- # return 0 00:09:45.134 16:05:45 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:45.134 16:05:45 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:45.134 16:05:45 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:45.134 16:05:45 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:45.134 16:05:45 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:45.134 16:05:45 -- common/autotest_common.sh@638 -- # local es=0 00:09:45.134 16:05:45 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:45.135 16:05:45 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:45.135 16:05:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:45.135 16:05:45 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:45.135 16:05:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:45.135 16:05:45 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:45.135 16:05:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:45.135 16:05:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:45.135 16:05:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:45.135 16:05:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:45.135 16:05:46 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:45.135 16:05:46 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:45.135 16:05:46 -- common/autotest_common.sh@641 -- # es=1 00:09:45.135 16:05:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:45.135 16:05:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:45.135 16:05:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:45.135 16:05:46 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:45.135 16:05:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:45.135 16:05:46 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:45.135 [ 0]:0x2 00:09:45.135 16:05:46 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:09:45.135 16:05:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:45.135 16:05:46 -- target/ns_masking.sh@40 -- # nguid=7cd358ecf9604504b04ba2efd64a31ef 00:09:45.135 16:05:46 -- target/ns_masking.sh@41 -- # [[ 7cd358ecf9604504b04ba2efd64a31ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:45.135 16:05:46 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:45.135 16:05:46 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:45.135 16:05:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:45.135 16:05:46 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:45.135 [ 0]:0x1 00:09:45.135 16:05:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:45.135 16:05:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:45.135 16:05:46 -- target/ns_masking.sh@40 -- # nguid=ae50e9fa5a2a4c99a0b93600ff6b387c 00:09:45.135 16:05:46 -- target/ns_masking.sh@41 -- # [[ ae50e9fa5a2a4c99a0b93600ff6b387c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:45.135 16:05:46 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:45.135 16:05:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:45.135 16:05:46 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:45.135 [ 1]:0x2 00:09:45.135 16:05:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:45.135 16:05:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:45.395 16:05:46 -- target/ns_masking.sh@40 -- # nguid=7cd358ecf9604504b04ba2efd64a31ef 00:09:45.395 16:05:46 -- target/ns_masking.sh@41 -- # [[ 7cd358ecf9604504b04ba2efd64a31ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:45.395 16:05:46 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:45.655 16:05:46 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:45.655 16:05:46 -- common/autotest_common.sh@638 -- # local es=0 00:09:45.655 16:05:46 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:45.655 16:05:46 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:45.655 16:05:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:45.655 16:05:46 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:45.655 16:05:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:45.655 16:05:46 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:45.655 16:05:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:45.655 16:05:46 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:45.655 16:05:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:45.655 16:05:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:45.655 16:05:46 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:45.655 16:05:46 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:45.655 16:05:46 -- common/autotest_common.sh@641 -- # es=1 00:09:45.655 16:05:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:45.655 16:05:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:45.655 16:05:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:45.655 16:05:46 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:45.655 16:05:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:45.655 16:05:46 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:45.655 [ 0]:0x2 00:09:45.655 16:05:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:45.655 16:05:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:45.655 16:05:46 -- target/ns_masking.sh@40 -- # nguid=7cd358ecf9604504b04ba2efd64a31ef 00:09:45.655 16:05:46 -- target/ns_masking.sh@41 -- # [[ 7cd358ecf9604504b04ba2efd64a31ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:45.655 16:05:46 -- target/ns_masking.sh@91 -- # disconnect 00:09:45.655 16:05:46 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.655 16:05:46 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:45.915 16:05:47 -- target/ns_masking.sh@95 -- # connect 2 00:09:45.915 16:05:47 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e63a6e0a-1b3f-450b-aee5-042cb5c925c0 -a 10.0.0.2 -s 4420 -i 4 00:09:46.175 16:05:47 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:46.175 16:05:47 -- common/autotest_common.sh@1184 -- # local i=0 00:09:46.175 16:05:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.175 16:05:47 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:46.175 16:05:47 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:46.175 16:05:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:48.082 16:05:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:48.082 16:05:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:48.082 16:05:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.082 16:05:49 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:09:48.082 16:05:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.082 16:05:49 -- common/autotest_common.sh@1194 -- # return 0 00:09:48.082 16:05:49 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:48.082 16:05:49 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:48.340 16:05:49 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:48.340 16:05:49 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:48.340 16:05:49 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:48.340 16:05:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.340 16:05:49 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:48.340 [ 0]:0x1 00:09:48.340 16:05:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:48.340 16:05:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.340 16:05:49 -- target/ns_masking.sh@40 -- # nguid=ae50e9fa5a2a4c99a0b93600ff6b387c 00:09:48.340 16:05:49 -- target/ns_masking.sh@41 -- # [[ ae50e9fa5a2a4c99a0b93600ff6b387c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.340 16:05:49 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:48.340 16:05:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.340 16:05:49 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:48.340 [ 1]:0x2 
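Each ns_is_visible / NOT pair above probes masking the same way: grep the nsid out of nvme list-ns, then read the NGUID with nvme id-ns; a masked namespace reports an all-zero NGUID. A sketch of the toggle sequence being verified, using the RPCs and NQNs from this log:

  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  ./scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme list-ns /dev/nvme0 | grep 0x1                      # nsid 1 now listed
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid     # real NGUID while visible
  ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid     # 32 zeros once masked again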
00:09:48.340 16:05:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:48.340 16:05:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.340 16:05:49 -- target/ns_masking.sh@40 -- # nguid=7cd358ecf9604504b04ba2efd64a31ef 00:09:48.340 16:05:49 -- target/ns_masking.sh@41 -- # [[ 7cd358ecf9604504b04ba2efd64a31ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.340 16:05:49 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:48.596 16:05:49 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:48.596 16:05:49 -- common/autotest_common.sh@638 -- # local es=0 00:09:48.596 16:05:49 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:48.596 16:05:49 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:48.596 16:05:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.596 16:05:49 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:48.596 16:05:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.596 16:05:49 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:48.596 16:05:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.596 16:05:49 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:48.596 16:05:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:48.596 16:05:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.596 16:05:49 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:48.596 16:05:49 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.596 16:05:49 -- common/autotest_common.sh@641 -- # es=1 00:09:48.596 16:05:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:48.596 16:05:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:48.596 16:05:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:48.596 16:05:49 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:48.596 16:05:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.596 16:05:49 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:48.596 [ 0]:0x2 00:09:48.596 16:05:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:48.596 16:05:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.853 16:05:49 -- target/ns_masking.sh@40 -- # nguid=7cd358ecf9604504b04ba2efd64a31ef 00:09:48.853 16:05:49 -- target/ns_masking.sh@41 -- # [[ 7cd358ecf9604504b04ba2efd64a31ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.853 16:05:49 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:48.853 16:05:49 -- common/autotest_common.sh@638 -- # local es=0 00:09:48.853 16:05:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:48.853 16:05:49 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.853 16:05:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.853 16:05:49 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.853 16:05:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.853 16:05:49 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.853 16:05:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.853 16:05:49 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.853 16:05:49 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:48.853 16:05:49 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:49.110 [2024-04-24 16:05:50.142902] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:49.110 request: 00:09:49.110 { 00:09:49.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:49.110 "nsid": 2, 00:09:49.110 "host": "nqn.2016-06.io.spdk:host1", 00:09:49.110 "method": "nvmf_ns_remove_host", 00:09:49.110 "req_id": 1 00:09:49.110 } 00:09:49.110 Got JSON-RPC error response 00:09:49.110 response: 00:09:49.110 { 00:09:49.110 "code": -32602, 00:09:49.110 "message": "Invalid parameters" 00:09:49.110 } 00:09:49.110 16:05:50 -- common/autotest_common.sh@641 -- # es=1 00:09:49.110 16:05:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:49.110 16:05:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:49.110 16:05:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:49.110 16:05:50 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:49.110 16:05:50 -- common/autotest_common.sh@638 -- # local es=0 00:09:49.110 16:05:50 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:49.110 16:05:50 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:49.110 16:05:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:49.110 16:05:50 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:49.110 16:05:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:49.110 16:05:50 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:49.110 16:05:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:49.110 16:05:50 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:49.110 16:05:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:49.110 16:05:50 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:49.110 16:05:50 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:49.110 16:05:50 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.110 16:05:50 -- common/autotest_common.sh@641 -- # es=1 00:09:49.110 16:05:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:49.110 16:05:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:49.110 16:05:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:49.111 16:05:50 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:49.111 16:05:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:49.111 16:05:50 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:49.111 [ 0]:0x2 00:09:49.111 16:05:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:49.111 16:05:50 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:09:49.111 16:05:50 -- target/ns_masking.sh@40 -- # nguid=7cd358ecf9604504b04ba2efd64a31ef 00:09:49.111 16:05:50 -- target/ns_masking.sh@41 -- # [[ 7cd358ecf9604504b04ba2efd64a31ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.111 16:05:50 -- target/ns_masking.sh@108 -- # disconnect 00:09:49.111 16:05:50 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.111 16:05:50 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.369 16:05:50 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:49.369 16:05:50 -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:49.369 16:05:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:49.369 16:05:50 -- nvmf/common.sh@117 -- # sync 00:09:49.369 16:05:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.369 16:05:50 -- nvmf/common.sh@120 -- # set +e 00:09:49.369 16:05:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.369 16:05:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.369 rmmod nvme_tcp 00:09:49.369 rmmod nvme_fabrics 00:09:49.369 rmmod nvme_keyring 00:09:49.369 16:05:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.369 16:05:50 -- nvmf/common.sh@124 -- # set -e 00:09:49.369 16:05:50 -- nvmf/common.sh@125 -- # return 0 00:09:49.369 16:05:50 -- nvmf/common.sh@478 -- # '[' -n 3343911 ']' 00:09:49.369 16:05:50 -- nvmf/common.sh@479 -- # killprocess 3343911 00:09:49.369 16:05:50 -- common/autotest_common.sh@936 -- # '[' -z 3343911 ']' 00:09:49.369 16:05:50 -- common/autotest_common.sh@940 -- # kill -0 3343911 00:09:49.369 16:05:50 -- common/autotest_common.sh@941 -- # uname 00:09:49.369 16:05:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:49.369 16:05:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3343911 00:09:49.369 16:05:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:49.369 16:05:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:49.369 16:05:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3343911' 00:09:49.369 killing process with pid 3343911 00:09:49.369 16:05:50 -- common/autotest_common.sh@955 -- # kill 3343911 00:09:49.369 16:05:50 -- common/autotest_common.sh@960 -- # wait 3343911 00:09:49.938 16:05:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:49.938 16:05:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:49.938 16:05:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:49.938 16:05:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.938 16:05:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.938 16:05:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.938 16:05:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.938 16:05:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.847 16:05:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:51.847 00:09:51.847 real 0m17.121s 00:09:51.847 user 0m53.746s 00:09:51.847 sys 0m3.709s 00:09:51.847 16:05:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:51.847 16:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:51.847 ************************************ 00:09:51.847 END TEST nvmf_ns_masking 00:09:51.847 
************************************ 00:09:51.847 16:05:53 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:51.847 16:05:53 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:51.847 16:05:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:51.847 16:05:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.847 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:09:51.847 ************************************ 00:09:51.847 START TEST nvmf_nvme_cli 00:09:51.847 ************************************ 00:09:51.847 16:05:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:52.105 * Looking for test storage... 00:09:52.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.105 16:05:53 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.105 16:05:53 -- nvmf/common.sh@7 -- # uname -s 00:09:52.105 16:05:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.105 16:05:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.105 16:05:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.105 16:05:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.105 16:05:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.105 16:05:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.105 16:05:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.105 16:05:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.105 16:05:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.105 16:05:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.105 16:05:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:52.106 16:05:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:52.106 16:05:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.106 16:05:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.106 16:05:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.106 16:05:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.106 16:05:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.106 16:05:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.106 16:05:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.106 16:05:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.106 16:05:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.106 16:05:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.106 16:05:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.106 16:05:53 -- paths/export.sh@5 -- # export PATH 00:09:52.106 16:05:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.106 16:05:53 -- nvmf/common.sh@47 -- # : 0 00:09:52.106 16:05:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.106 16:05:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.106 16:05:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.106 16:05:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.106 16:05:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.106 16:05:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.106 16:05:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.106 16:05:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.106 16:05:53 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.106 16:05:53 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.106 16:05:53 -- target/nvme_cli.sh@14 -- # devs=() 00:09:52.106 16:05:53 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:52.106 16:05:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:52.106 16:05:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.106 16:05:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:52.106 16:05:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:52.106 16:05:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:52.106 16:05:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.106 16:05:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.106 16:05:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.106 16:05:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:52.106 16:05:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:52.106 16:05:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.106 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:09:54.013 16:05:55 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:54.013 16:05:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.013 16:05:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.013 16:05:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.013 16:05:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.013 16:05:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.013 16:05:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.013 16:05:55 -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.013 16:05:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.013 16:05:55 -- nvmf/common.sh@296 -- # e810=() 00:09:54.013 16:05:55 -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.013 16:05:55 -- nvmf/common.sh@297 -- # x722=() 00:09:54.013 16:05:55 -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.013 16:05:55 -- nvmf/common.sh@298 -- # mlx=() 00:09:54.013 16:05:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.013 16:05:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.013 16:05:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.013 16:05:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.013 16:05:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.013 16:05:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.013 16:05:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:54.013 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:54.013 16:05:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.013 16:05:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:54.013 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:54.013 16:05:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
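
The xtrace above shows gather_supported_nvmf_pci_devs selecting NICs purely by PCI vendor:device ID (0x8086:0x159b is the E810 pair found on this host). A minimal sketch of the same sysfs walk, independent of the real pci_bus_cache plumbing; the hard-coded IDs are just the ones matched above:

  # enumerate PCI functions matching a vendor:device pair via sysfs
  vendor=0x8086 device=0x159b
  for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$vendor" && $(cat "$pci/device") == "$device" ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    ls "$pci/net" 2>/dev/null    # bound net interfaces, e.g. cvl_0_0
  done
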
00:09:54.013 16:05:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.013 16:05:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.013 16:05:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.013 16:05:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:54.013 16:05:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.013 16:05:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:54.013 Found net devices under 0000:09:00.0: cvl_0_0 00:09:54.013 16:05:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.013 16:05:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.013 16:05:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.013 16:05:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:54.013 16:05:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.013 16:05:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:54.013 Found net devices under 0000:09:00.1: cvl_0_1 00:09:54.013 16:05:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.013 16:05:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:54.013 16:05:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:54.013 16:05:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:54.013 16:05:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:54.013 16:05:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.013 16:05:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.013 16:05:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.013 16:05:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.013 16:05:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.013 16:05:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.013 16:05:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.013 16:05:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.013 16:05:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.013 16:05:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.013 16:05:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.013 16:05:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.013 16:05:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.013 16:05:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.013 16:05:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.013 16:05:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.013 16:05:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.271 16:05:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.271 16:05:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.271 16:05:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:09:54.271 00:09:54.271 --- 10.0.0.2 ping statistics --- 00:09:54.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.271 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:09:54.271 16:05:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:09:54.271 00:09:54.271 --- 10.0.0.1 ping statistics --- 00:09:54.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.271 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:09:54.271 16:05:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.271 16:05:55 -- nvmf/common.sh@411 -- # return 0 00:09:54.271 16:05:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:54.271 16:05:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.271 16:05:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:54.271 16:05:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:54.271 16:05:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.271 16:05:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:54.271 16:05:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:54.271 16:05:55 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:54.271 16:05:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:54.271 16:05:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:54.271 16:05:55 -- common/autotest_common.sh@10 -- # set +x 00:09:54.271 16:05:55 -- nvmf/common.sh@470 -- # nvmfpid=3347590 00:09:54.271 16:05:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.271 16:05:55 -- nvmf/common.sh@471 -- # waitforlisten 3347590 00:09:54.271 16:05:55 -- common/autotest_common.sh@817 -- # '[' -z 3347590 ']' 00:09:54.271 16:05:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.271 16:05:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:54.271 16:05:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.271 16:05:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:54.271 16:05:55 -- common/autotest_common.sh@10 -- # set +x 00:09:54.271 [2024-04-24 16:05:55.415442] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:09:54.271 [2024-04-24 16:05:55.415525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.271 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.271 [2024-04-24 16:05:55.484882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.531 [2024-04-24 16:05:55.604581] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.531 [2024-04-24 16:05:55.604652] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
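
The nvmf_tcp_init sequence above gives the test a self-contained topology: one E810 port (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2 to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and pings in both directions prove reachability before the target app starts. Condensed into a standalone sketch; the polling loop merely stands in for waitforlisten and assumes nvmf_tgt and rpc.py at their usual SPDK-tree paths:

  # target port inside a netns, initiator port on the host
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2

  # start the target inside the namespace; its RPC unix socket is a
  # filesystem object, so it stays reachable from the root namespace
  modprobe nvme-tcp
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  pid=$!
  for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
  done
  kill -0 "$pid"    # app survived startup and is answering RPCs
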
00:09:54.531 [2024-04-24 16:05:55.604669] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.531 [2024-04-24 16:05:55.604682] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.531 [2024-04-24 16:05:55.604694] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.531 [2024-04-24 16:05:55.604784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.531 [2024-04-24 16:05:55.604859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.531 [2024-04-24 16:05:55.604827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.531 [2024-04-24 16:05:55.604865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.098 16:05:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:55.098 16:05:56 -- common/autotest_common.sh@850 -- # return 0 00:09:55.098 16:05:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:55.098 16:05:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:55.098 16:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:55.098 16:05:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.098 16:05:56 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.098 16:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.098 16:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:55.098 [2024-04-24 16:05:56.370590] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.098 16:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.098 16:05:56 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.098 16:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.098 16:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:55.360 Malloc0 00:09:55.360 16:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.360 16:05:56 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:55.360 16:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.360 16:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:55.360 Malloc1 00:09:55.360 16:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.360 16:05:56 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:55.360 16:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.360 16:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:55.360 16:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.360 16:05:56 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.360 16:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.360 16:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:55.360 16:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.360 16:05:56 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:55.360 16:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.360 16:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:55.360 16:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.360 16:05:56 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:09:55.360 16:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.360 16:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:55.360 [2024-04-24 16:05:56.452520] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.360 16:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.360 16:05:56 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:55.360 16:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.360 16:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:55.360 16:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.360 16:05:56 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:09:55.360 00:09:55.360 Discovery Log Number of Records 2, Generation counter 2 00:09:55.360 =====Discovery Log Entry 0====== 00:09:55.360 trtype: tcp 00:09:55.360 adrfam: ipv4 00:09:55.360 subtype: current discovery subsystem 00:09:55.360 treq: not required 00:09:55.360 portid: 0 00:09:55.360 trsvcid: 4420 00:09:55.360 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:55.360 traddr: 10.0.0.2 00:09:55.360 eflags: explicit discovery connections, duplicate discovery information 00:09:55.360 sectype: none 00:09:55.360 =====Discovery Log Entry 1====== 00:09:55.360 trtype: tcp 00:09:55.360 adrfam: ipv4 00:09:55.360 subtype: nvme subsystem 00:09:55.360 treq: not required 00:09:55.360 portid: 0 00:09:55.360 trsvcid: 4420 00:09:55.360 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:55.360 traddr: 10.0.0.2 00:09:55.360 eflags: none 00:09:55.360 sectype: none 00:09:55.360 16:05:56 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:55.360 16:05:56 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:55.360 16:05:56 -- nvmf/common.sh@511 -- # local dev _ 00:09:55.360 16:05:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:55.360 16:05:56 -- nvmf/common.sh@510 -- # nvme list 00:09:55.360 16:05:56 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:55.360 16:05:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:55.360 16:05:56 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:55.360 16:05:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:55.360 16:05:56 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:55.360 16:05:56 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:55.962 16:05:57 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:55.963 16:05:57 -- common/autotest_common.sh@1184 -- # local i=0 00:09:55.963 16:05:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.963 16:05:57 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:55.963 16:05:57 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:55.963 16:05:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:58.502 16:05:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:58.502 16:05:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:58.502 16:05:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.502 16:05:59 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
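
Reduced to its skeleton, the nvme_cli test traced above is a handful of RPCs on the target followed by stock nvme-cli on the initiator. The -o and -u 8192 flags mirror NVMF_TRANSPORT_OPTS from the trace; the hostnqn/hostid arguments and the extra subsystem options (-d, -i 291) are omitted here for brevity:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  nvme discover -t tcp -a 10.0.0.2 -s 4420     # two records: discovery + cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                    # /dev/nvme0n1 and /dev/nvme0n2 appear
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
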
00:09:58.502 16:05:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.502 16:05:59 -- common/autotest_common.sh@1194 -- # return 0 00:09:58.502 16:05:59 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:58.502 16:05:59 -- nvmf/common.sh@511 -- # local dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@510 -- # nvme list 00:09:58.502 16:05:59 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:58.502 /dev/nvme0n1 ]] 00:09:58.502 16:05:59 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:58.502 16:05:59 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:58.502 16:05:59 -- nvmf/common.sh@511 -- # local dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@510 -- # nvme list 00:09:58.502 16:05:59 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:58.502 16:05:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.502 16:05:59 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:58.502 16:05:59 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.502 16:05:59 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.502 16:05:59 -- common/autotest_common.sh@1205 -- # local i=0 00:09:58.502 16:05:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:58.502 16:05:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.502 16:05:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:58.502 16:05:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.502 16:05:59 -- common/autotest_common.sh@1217 -- # return 0 00:09:58.502 16:05:59 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:58.502 16:05:59 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.502 16:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.502 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:09:58.502 16:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.502 16:05:59 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:58.502 16:05:59 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:58.502 16:05:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:58.502 16:05:59 -- nvmf/common.sh@117 -- # sync 00:09:58.502 16:05:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:58.502 16:05:59 -- nvmf/common.sh@120 -- # set +e 00:09:58.502 16:05:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:58.502 16:05:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:58.502 rmmod nvme_tcp 00:09:58.502 rmmod nvme_fabrics 00:09:58.502 rmmod nvme_keyring 00:09:58.502 16:05:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:58.502 16:05:59 -- nvmf/common.sh@124 -- # set -e 00:09:58.502 16:05:59 -- nvmf/common.sh@125 -- # return 0 00:09:58.502 16:05:59 -- nvmf/common.sh@478 -- # '[' -n 3347590 ']' 00:09:58.502 16:05:59 -- nvmf/common.sh@479 -- # killprocess 3347590 00:09:58.502 16:05:59 -- common/autotest_common.sh@936 -- # '[' -z 3347590 ']' 00:09:58.502 16:05:59 -- common/autotest_common.sh@940 -- # kill -0 3347590 00:09:58.502 16:05:59 -- common/autotest_common.sh@941 -- # uname 00:09:58.502 16:05:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:58.502 16:05:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3347590 00:09:58.502 16:05:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:58.502 16:05:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:58.502 16:05:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3347590' 00:09:58.502 killing process with pid 3347590 00:09:58.502 16:05:59 -- common/autotest_common.sh@955 -- # kill 3347590 00:09:58.502 16:05:59 -- common/autotest_common.sh@960 -- # wait 3347590 00:09:58.502 16:05:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:58.502 16:05:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:58.502 16:05:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.502 16:05:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.502 16:05:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.502 16:05:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.503 16:05:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.040 16:06:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:01.040 00:10:01.040 real 0m8.694s 00:10:01.040 user 0m17.072s 00:10:01.040 sys 0m2.191s 00:10:01.040 16:06:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:01.040 16:06:01 -- common/autotest_common.sh@10 -- # set +x 00:10:01.040 ************************************ 00:10:01.040 END TEST nvmf_nvme_cli 00:10:01.040 ************************************ 00:10:01.040 16:06:01 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:01.040 16:06:01 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:01.040 16:06:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:01.040 16:06:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.040 16:06:01 -- common/autotest_common.sh@10 -- # set +x 00:10:01.040 ************************************ 00:10:01.040 START TEST nvmf_vfio_user 00:10:01.040 ************************************ 00:10:01.040 16:06:01 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:01.040 * Looking for test storage... 00:10:01.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.040 16:06:01 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.040 16:06:01 -- nvmf/common.sh@7 -- # uname -s 00:10:01.040 16:06:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.040 16:06:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.040 16:06:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.040 16:06:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.040 16:06:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.040 16:06:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.040 16:06:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.040 16:06:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.040 16:06:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.040 16:06:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.040 16:06:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:01.040 16:06:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:01.040 16:06:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.040 16:06:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.040 16:06:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.040 16:06:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.041 16:06:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.041 16:06:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.041 16:06:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.041 16:06:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.041 16:06:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated toolchain entries elided; value identical to the earlier nvme_cli run] 00:10:01.041 16:06:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[repeated toolchain entries elided] 00:10:01.041 16:06:01 -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.041 16:06:01 -- paths/export.sh@5 -- # export PATH 00:10:01.041 16:06:01 -- paths/export.sh@6 -- # echo [same PATH value; elided] 00:10:01.041 16:06:01 -- nvmf/common.sh@47 -- # : 0 00:10:01.041 16:06:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:01.041 16:06:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:01.041 16:06:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.041 16:06:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.041 16:06:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.041 16:06:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:01.041 16:06:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:01.041 16:06:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3348527 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3348527' 00:10:01.041 Process pid: 3348527 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:01.041 16:06:02 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3348527 00:10:01.041 16:06:02 -- common/autotest_common.sh@817 -- # '[' -z 3348527 ']' 00:10:01.041 16:06:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.041 16:06:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:01.041 16:06:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.041 16:06:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:01.041 16:06:02 -- common/autotest_common.sh@10 -- # set +x 00:10:01.041 [2024-04-24 16:06:02.054442] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:10:01.041 [2024-04-24 16:06:02.054539] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.041 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.041 [2024-04-24 16:06:02.114838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.041 [2024-04-24 16:06:02.219747] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.041 [2024-04-24 16:06:02.219804] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.041 [2024-04-24 16:06:02.219834] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.041 [2024-04-24 16:06:02.219847] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.041 [2024-04-24 16:06:02.219859] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.041 [2024-04-24 16:06:02.219912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.041 [2024-04-24 16:06:02.219972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.041 [2024-04-24 16:06:02.220023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.041 [2024-04-24 16:06:02.220026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.301 16:06:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:01.301 16:06:02 -- common/autotest_common.sh@850 -- # return 0 00:10:01.301 16:06:02 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:02.237 16:06:03 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:02.495 16:06:03 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:02.495 16:06:03 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:02.495 16:06:03 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:02.495 16:06:03 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:02.495 16:06:03 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:02.754 Malloc1 00:10:02.754 16:06:03 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:03.013 16:06:04 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:03.272 16:06:04 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:03.530 16:06:04 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:03.530 16:06:04 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:03.530 16:06:04 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:03.787 Malloc2 00:10:03.788 16:06:04 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:04.046 16:06:05 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:04.304 16:06:05 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:04.593 16:06:05 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:04.593 16:06:05 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:04.593 16:06:05 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:04.593 16:06:05 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:04.593 16:06:05 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:04.593 16:06:05 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:04.593 [2024-04-24 16:06:05.752197] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:10:04.593 [2024-04-24 16:06:05.752244] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349063 ] 00:10:04.593 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.593 [2024-04-24 16:06:05.792307] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:04.593 [2024-04-24 16:06:05.802200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:04.593 [2024-04-24 16:06:05.802228] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1511816000 00:10:04.593 [2024-04-24 16:06:05.803196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:04.593 [2024-04-24 16:06:05.804187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:04.593 [2024-04-24 16:06:05.805213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:04.593 [2024-04-24 16:06:05.806201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:04.593 [2024-04-24 16:06:05.807201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:04.593 [2024-04-24 16:06:05.808212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
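
Note the shape of the VFIOUSER setup being exercised from here on: unlike tcp, the listener address is a directory of vfio-user sockets created per controller, and spdk_nvme_identify attaches through that directory as if it were a local PCIe device, which is exactly the BAR-mapping debug output surrounding this point. The bring-up, condensed from the trace with paths relative to the SPDK tree:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

  # attach a local initiator through the socket directory instead of an IP:port
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
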
00:10:04.593 [2024-04-24 16:06:05.809220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:04.593 [2024-04-24 16:06:05.810227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:04.593 [2024-04-24 16:06:05.811236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:04.593 [2024-04-24 16:06:05.811260] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f151180b000 00:10:04.593 [2024-04-24 16:06:05.812411] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:04.593 [2024-04-24 16:06:05.826577] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:04.593 [2024-04-24 16:06:05.826611] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:04.593 [2024-04-24 16:06:05.835386] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:04.593 [2024-04-24 16:06:05.835444] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:04.593 [2024-04-24 16:06:05.835536] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:04.593 [2024-04-24 16:06:05.835566] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:04.593 [2024-04-24 16:06:05.835577] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:04.593 [2024-04-24 16:06:05.836377] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:04.593 [2024-04-24 16:06:05.836397] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:04.593 [2024-04-24 16:06:05.836410] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:04.593 [2024-04-24 16:06:05.837380] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:04.593 [2024-04-24 16:06:05.837406] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:04.593 [2024-04-24 16:06:05.837421] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:04.593 [2024-04-24 16:06:05.838386] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:04.593 [2024-04-24 16:06:05.838405] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:04.593 [2024-04-24 16:06:05.839390] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:04.593 [2024-04-24 16:06:05.839409] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:04.593 [2024-04-24 16:06:05.839419] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:04.593 [2024-04-24 16:06:05.839431] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:04.593 [2024-04-24 16:06:05.839541] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:04.593 [2024-04-24 16:06:05.839550] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:04.593 [2024-04-24 16:06:05.839559] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:04.593 [2024-04-24 16:06:05.840392] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:04.593 [2024-04-24 16:06:05.841398] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:04.594 [2024-04-24 16:06:05.842411] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:04.594 [2024-04-24 16:06:05.843405] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:04.594 [2024-04-24 16:06:05.843518] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:04.594 [2024-04-24 16:06:05.844418] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:04.594 [2024-04-24 16:06:05.844436] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:04.594 [2024-04-24 16:06:05.844446] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844471] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:04.594 [2024-04-24 16:06:05.844485] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844510] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:04.594 [2024-04-24 16:06:05.844520] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:04.594 [2024-04-24 16:06:05.844539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:04.594 [2024-04-24 
16:06:05.844595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:04.594 [2024-04-24 16:06:05.844612] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:04.594 [2024-04-24 16:06:05.844620] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:04.594 [2024-04-24 16:06:05.844629] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:04.594 [2024-04-24 16:06:05.844636] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:04.594 [2024-04-24 16:06:05.844645] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:04.594 [2024-04-24 16:06:05.844653] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:04.594 [2024-04-24 16:06:05.844661] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844674] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:04.594 [2024-04-24 16:06:05.844703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:04.594 [2024-04-24 16:06:05.844739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.594 [2024-04-24 16:06:05.844762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.594 [2024-04-24 16:06:05.844776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.594 [2024-04-24 16:06:05.844788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.594 [2024-04-24 16:06:05.844797] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844817] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:04.594 [2024-04-24 16:06:05.844848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:04.594 [2024-04-24 16:06:05.844859] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:04.594 [2024-04-24 16:06:05.844868] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844884] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844895] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:04.594 [2024-04-24 16:06:05.844923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:04.594 [2024-04-24 16:06:05.844980] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.844997] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845011] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:04.594 [2024-04-24 16:06:05.845020] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:04.594 [2024-04-24 16:06:05.845031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:04.594 [2024-04-24 16:06:05.845061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:04.594 [2024-04-24 16:06:05.845078] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:04.594 [2024-04-24 16:06:05.845094] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845107] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845120] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:04.594 [2024-04-24 16:06:05.845129] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:04.594 [2024-04-24 16:06:05.845139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:04.594 [2024-04-24 16:06:05.845165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:04.594 [2024-04-24 16:06:05.845186] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845201] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845213] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:10:04.594 [2024-04-24 16:06:05.845222] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:04.594 [2024-04-24 16:06:05.845232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:04.594 [2024-04-24 16:06:05.845243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:04.594 [2024-04-24 16:06:05.845258] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845270] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845284] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845294] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845302] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845311] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:04.594 [2024-04-24 16:06:05.845318] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:04.594 [2024-04-24 16:06:05.845331] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:04.594 [2024-04-24 16:06:05.845358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:04.594 [2024-04-24 16:06:05.845377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:04.594 [2024-04-24 16:06:05.845396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:04.594 [2024-04-24 16:06:05.845412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:04.594 [2024-04-24 16:06:05.845428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:04.594 [2024-04-24 16:06:05.845440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:04.595 [2024-04-24 16:06:05.845456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:04.595 [2024-04-24 16:06:05.845468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:04.595 [2024-04-24 16:06:05.845485] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:04.595 [2024-04-24 16:06:05.845495] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:04.595 [2024-04-24 16:06:05.845502] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:04.595 [2024-04-24 16:06:05.845508] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:04.595 [2024-04-24 16:06:05.845518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:04.595 [2024-04-24 16:06:05.845530] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:04.595 [2024-04-24 16:06:05.845539] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:04.595 [2024-04-24 16:06:05.845548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:04.595 [2024-04-24 16:06:05.845559] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:04.595 [2024-04-24 16:06:05.845568] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:04.595 [2024-04-24 16:06:05.845578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:04.595 [2024-04-24 16:06:05.845590] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:04.595 [2024-04-24 16:06:05.845598] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:04.595 [2024-04-24 16:06:05.845607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:04.595 [2024-04-24 16:06:05.845619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:04.595 [2024-04-24 16:06:05.845641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:04.595 [2024-04-24 16:06:05.845657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:04.595 [2024-04-24 16:06:05.845669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:04.595 ===================================================== 00:10:04.595 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:04.595 ===================================================== 00:10:04.595 Controller Capabilities/Features 00:10:04.595 ================================ 00:10:04.595 Vendor ID: 4e58 00:10:04.595 Subsystem Vendor ID: 4e58 00:10:04.595 Serial Number: SPDK1 00:10:04.595 Model Number: SPDK bdev Controller 00:10:04.595 Firmware Version: 24.05 00:10:04.595 Recommended Arb Burst: 6 00:10:04.595 IEEE OUI Identifier: 8d 6b 50 00:10:04.595 Multi-path I/O 00:10:04.595 May have multiple subsystem ports: Yes 00:10:04.595 May have multiple controllers: Yes 00:10:04.595 Associated with SR-IOV VF: No 00:10:04.595 Max Data Transfer Size: 131072 00:10:04.595 Max Number of Namespaces: 32 00:10:04.595 Max Number of I/O Queues: 127 00:10:04.595 NVMe 
Specification Version (VS): 1.3 00:10:04.595 NVMe Specification Version (Identify): 1.3 00:10:04.595 Maximum Queue Entries: 256 00:10:04.595 Contiguous Queues Required: Yes 00:10:04.595 Arbitration Mechanisms Supported 00:10:04.595 Weighted Round Robin: Not Supported 00:10:04.595 Vendor Specific: Not Supported 00:10:04.595 Reset Timeout: 15000 ms 00:10:04.595 Doorbell Stride: 4 bytes 00:10:04.595 NVM Subsystem Reset: Not Supported 00:10:04.595 Command Sets Supported 00:10:04.595 NVM Command Set: Supported 00:10:04.595 Boot Partition: Not Supported 00:10:04.595 Memory Page Size Minimum: 4096 bytes 00:10:04.595 Memory Page Size Maximum: 4096 bytes 00:10:04.595 Persistent Memory Region: Not Supported 00:10:04.595 Optional Asynchronous Events Supported 00:10:04.595 Namespace Attribute Notices: Supported 00:10:04.595 Firmware Activation Notices: Not Supported 00:10:04.595 ANA Change Notices: Not Supported 00:10:04.595 PLE Aggregate Log Change Notices: Not Supported 00:10:04.595 LBA Status Info Alert Notices: Not Supported 00:10:04.595 EGE Aggregate Log Change Notices: Not Supported 00:10:04.595 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.595 Zone Descriptor Change Notices: Not Supported 00:10:04.595 Discovery Log Change Notices: Not Supported 00:10:04.595 Controller Attributes 00:10:04.595 128-bit Host Identifier: Supported 00:10:04.595 Non-Operational Permissive Mode: Not Supported 00:10:04.595 NVM Sets: Not Supported 00:10:04.595 Read Recovery Levels: Not Supported 00:10:04.595 Endurance Groups: Not Supported 00:10:04.595 Predictable Latency Mode: Not Supported 00:10:04.595 Traffic Based Keep ALive: Not Supported 00:10:04.595 Namespace Granularity: Not Supported 00:10:04.595 SQ Associations: Not Supported 00:10:04.595 UUID List: Not Supported 00:10:04.595 Multi-Domain Subsystem: Not Supported 00:10:04.595 Fixed Capacity Management: Not Supported 00:10:04.595 Variable Capacity Management: Not Supported 00:10:04.595 Delete Endurance Group: Not Supported 00:10:04.595 Delete NVM Set: Not Supported 00:10:04.595 Extended LBA Formats Supported: Not Supported 00:10:04.595 Flexible Data Placement Supported: Not Supported 00:10:04.595 00:10:04.595 Controller Memory Buffer Support 00:10:04.595 ================================ 00:10:04.595 Supported: No 00:10:04.595 00:10:04.595 Persistent Memory Region Support 00:10:04.595 ================================ 00:10:04.595 Supported: No 00:10:04.595 00:10:04.595 Admin Command Set Attributes 00:10:04.595 ============================ 00:10:04.595 Security Send/Receive: Not Supported 00:10:04.595 Format NVM: Not Supported 00:10:04.595 Firmware Activate/Download: Not Supported 00:10:04.595 Namespace Management: Not Supported 00:10:04.595 Device Self-Test: Not Supported 00:10:04.595 Directives: Not Supported 00:10:04.595 NVMe-MI: Not Supported 00:10:04.595 Virtualization Management: Not Supported 00:10:04.595 Doorbell Buffer Config: Not Supported 00:10:04.595 Get LBA Status Capability: Not Supported 00:10:04.595 Command & Feature Lockdown Capability: Not Supported 00:10:04.595 Abort Command Limit: 4 00:10:04.595 Async Event Request Limit: 4 00:10:04.595 Number of Firmware Slots: N/A 00:10:04.595 Firmware Slot 1 Read-Only: N/A 00:10:04.595 Firmware Activation Without Reset: N/A 00:10:04.595 Multiple Update Detection Support: N/A 00:10:04.595 Firmware Update Granularity: No Information Provided 00:10:04.595 Per-Namespace SMART Log: No 00:10:04.595 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.595 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:10:04.595 Command Effects Log Page: Supported 00:10:04.595 Get Log Page Extended Data: Supported 00:10:04.595 Telemetry Log Pages: Not Supported 00:10:04.595 Persistent Event Log Pages: Not Supported 00:10:04.595 Supported Log Pages Log Page: May Support 00:10:04.595 Commands Supported & Effects Log Page: Not Supported 00:10:04.595 Feature Identifiers & Effects Log Page:May Support 00:10:04.595 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.595 Data Area 4 for Telemetry Log: Not Supported 00:10:04.595 Error Log Page Entries Supported: 128 00:10:04.595 Keep Alive: Supported 00:10:04.595 Keep Alive Granularity: 10000 ms 00:10:04.595 00:10:04.595 NVM Command Set Attributes 00:10:04.595 ========================== 00:10:04.595 Submission Queue Entry Size 00:10:04.595 Max: 64 00:10:04.595 Min: 64 00:10:04.595 Completion Queue Entry Size 00:10:04.595 Max: 16 00:10:04.595 Min: 16 00:10:04.595 Number of Namespaces: 32 00:10:04.596 Compare Command: Supported 00:10:04.596 Write Uncorrectable Command: Not Supported 00:10:04.596 Dataset Management Command: Supported 00:10:04.596 Write Zeroes Command: Supported 00:10:04.596 Set Features Save Field: Not Supported 00:10:04.596 Reservations: Not Supported 00:10:04.596 Timestamp: Not Supported 00:10:04.596 Copy: Supported 00:10:04.596 Volatile Write Cache: Present 00:10:04.596 Atomic Write Unit (Normal): 1 00:10:04.596 Atomic Write Unit (PFail): 1 00:10:04.596 Atomic Compare & Write Unit: 1 00:10:04.596 Fused Compare & Write: Supported 00:10:04.596 Scatter-Gather List 00:10:04.596 SGL Command Set: Supported (Dword aligned) 00:10:04.596 SGL Keyed: Not Supported 00:10:04.596 SGL Bit Bucket Descriptor: Not Supported 00:10:04.596 SGL Metadata Pointer: Not Supported 00:10:04.596 Oversized SGL: Not Supported 00:10:04.596 SGL Metadata Address: Not Supported 00:10:04.596 SGL Offset: Not Supported 00:10:04.596 Transport SGL Data Block: Not Supported 00:10:04.596 Replay Protected Memory Block: Not Supported 00:10:04.596 00:10:04.596 Firmware Slot Information 00:10:04.596 ========================= 00:10:04.596 Active slot: 1 00:10:04.596 Slot 1 Firmware Revision: 24.05 00:10:04.596 00:10:04.596 00:10:04.596 Commands Supported and Effects 00:10:04.596 ============================== 00:10:04.596 Admin Commands 00:10:04.596 -------------- 00:10:04.596 Get Log Page (02h): Supported 00:10:04.596 Identify (06h): Supported 00:10:04.596 Abort (08h): Supported 00:10:04.596 Set Features (09h): Supported 00:10:04.596 Get Features (0Ah): Supported 00:10:04.596 Asynchronous Event Request (0Ch): Supported 00:10:04.596 Keep Alive (18h): Supported 00:10:04.596 I/O Commands 00:10:04.596 ------------ 00:10:04.596 Flush (00h): Supported LBA-Change 00:10:04.596 Write (01h): Supported LBA-Change 00:10:04.596 Read (02h): Supported 00:10:04.596 Compare (05h): Supported 00:10:04.596 Write Zeroes (08h): Supported LBA-Change 00:10:04.596 Dataset Management (09h): Supported LBA-Change 00:10:04.596 Copy (19h): Supported LBA-Change 00:10:04.596 Unknown (79h): Supported LBA-Change 00:10:04.596 Unknown (7Ah): Supported 00:10:04.596 00:10:04.596 Error Log 00:10:04.596 ========= 00:10:04.596 00:10:04.596 Arbitration 00:10:04.596 =========== 00:10:04.596 Arbitration Burst: 1 00:10:04.596 00:10:04.596 Power Management 00:10:04.596 ================ 00:10:04.596 Number of Power States: 1 00:10:04.596 Current Power State: Power State #0 00:10:04.596 Power State #0: 00:10:04.596 Max Power: 0.00 W 00:10:04.596 Non-Operational State: Operational 00:10:04.596 Entry 
Latency: Not Reported 00:10:04.596 Exit Latency: Not Reported 00:10:04.596 Relative Read Throughput: 0 00:10:04.596 Relative Read Latency: 0 00:10:04.596 Relative Write Throughput: 0 00:10:04.596 Relative Write Latency: 0 00:10:04.596 Idle Power: Not Reported 00:10:04.596 Active Power: Not Reported 00:10:04.596 Non-Operational Permissive Mode: Not Supported 00:10:04.596 00:10:04.596 Health Information 00:10:04.596 ================== 00:10:04.596 Critical Warnings: 00:10:04.596 Available Spare Space: OK 00:10:04.596 Temperature: OK 00:10:04.596 Device Reliability: OK 00:10:04.596 Read Only: No 00:10:04.596 Volatile Memory Backup: OK 00:10:04.596 [2024-04-24 16:06:05.845823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:04.596 [2024-04-24 16:06:05.845844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:04.596 [2024-04-24 16:06:05.845882] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:04.596 [2024-04-24 16:06:05.845900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.596 [2024-04-24 16:06:05.845912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.596 [2024-04-24 16:06:05.845923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.596 [2024-04-24 16:06:05.845933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.596 [2024-04-24 16:06:05.846432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:04.596 [2024-04-24 16:06:05.846453] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:04.596 [2024-04-24 16:06:05.847429] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:04.596 [2024-04-24 16:06:05.847517] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:04.596 [2024-04-24 16:06:05.847532] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:04.596 [2024-04-24 16:06:05.848439] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:04.596 [2024-04-24 16:06:05.848461] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:04.596 [2024-04-24 16:06:05.848516] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:04.859 [2024-04-24 16:06:05.850480] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:04.859 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:04.859 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:04.859 Available Spare: 0% 00:10:04.859 Available Spare Threshold: 0% 00:10:04.859 Life Percentage Used: 0%
00:10:04.859 Data Units Read: 0 00:10:04.859 Data Units Written: 0 00:10:04.859 Host Read Commands: 0 00:10:04.859 Host Write Commands: 0 00:10:04.859 Controller Busy Time: 0 minutes 00:10:04.859 Power Cycles: 0 00:10:04.859 Power On Hours: 0 hours 00:10:04.859 Unsafe Shutdowns: 0 00:10:04.859 Unrecoverable Media Errors: 0 00:10:04.859 Lifetime Error Log Entries: 0 00:10:04.859 Warning Temperature Time: 0 minutes 00:10:04.859 Critical Temperature Time: 0 minutes 00:10:04.859 00:10:04.859 Number of Queues 00:10:04.859 ================ 00:10:04.859 Number of I/O Submission Queues: 127 00:10:04.859 Number of I/O Completion Queues: 127 00:10:04.859 00:10:04.859 Active Namespaces 00:10:04.859 ================= 00:10:04.859 Namespace ID:1 00:10:04.859 Error Recovery Timeout: Unlimited 00:10:04.859 Command Set Identifier: NVM (00h) 00:10:04.859 Deallocate: Supported 00:10:04.859 Deallocated/Unwritten Error: Not Supported 00:10:04.859 Deallocated Read Value: Unknown 00:10:04.859 Deallocate in Write Zeroes: Not Supported 00:10:04.859 Deallocated Guard Field: 0xFFFF 00:10:04.859 Flush: Supported 00:10:04.859 Reservation: Supported 00:10:04.859 Namespace Sharing Capabilities: Multiple Controllers 00:10:04.859 Size (in LBAs): 131072 (0GiB) 00:10:04.859 Capacity (in LBAs): 131072 (0GiB) 00:10:04.859 Utilization (in LBAs): 131072 (0GiB) 00:10:04.859 NGUID: A308981BFCAD472AA27AB3D8CCC7D66E 00:10:04.859 UUID: a308981b-fcad-472a-a27a-b3d8ccc7d66e 00:10:04.859 Thin Provisioning: Not Supported 00:10:04.859 Per-NS Atomic Units: Yes 00:10:04.859 Atomic Boundary Size (Normal): 0 00:10:04.859 Atomic Boundary Size (PFail): 0 00:10:04.859 Atomic Boundary Offset: 0 00:10:04.859 Maximum Single Source Range Length: 65535 00:10:04.859 Maximum Copy Length: 65535 00:10:04.859 Maximum Source Range Count: 1 00:10:04.859 NGUID/EUI64 Never Reused: No 00:10:04.859 Namespace Write Protected: No 00:10:04.859 Number of LBA Formats: 1 00:10:04.859 Current LBA Format: LBA Format #00 00:10:04.859 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.859 00:10:04.859 16:06:05 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:04.859 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.859 [2024-04-24 16:06:06.079568] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:10.225 [2024-04-24 16:06:11.103190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:10.225 Initializing NVMe Controllers 00:10:10.225 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:10.225 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:10.225 Initialization complete. Launching workers. 
00:10:10.225 ======================================================== 00:10:10.225 Latency(us) 00:10:10.225 Device Information : IOPS MiB/s Average min max 00:10:10.225 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34503.40 134.78 3711.02 1176.13 8636.56 00:10:10.225 ======================================================== 00:10:10.225 Total : 34503.40 134.78 3711.02 1176.13 8636.56 00:10:10.225 00:10:10.225 16:06:11 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:10.225 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.225 [2024-04-24 16:06:11.337294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:15.492 [2024-04-24 16:06:16.375221] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:15.492 Initializing NVMe Controllers 00:10:15.492 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:15.492 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:15.493 Initialization complete. Launching workers. 00:10:15.493 ======================================================== 00:10:15.493 Latency(us) 00:10:15.493 Device Information : IOPS MiB/s Average min max 00:10:15.493 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16024.63 62.60 7998.57 7546.61 15951.03 00:10:15.493 ======================================================== 00:10:15.493 Total : 16024.63 62.60 7998.57 7546.61 15951.03 00:10:15.493 00:10:15.493 16:06:16 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:15.493 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.493 [2024-04-24 16:06:16.584235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:20.766 [2024-04-24 16:06:21.656140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:20.766 Initializing NVMe Controllers 00:10:20.766 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:20.766 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:20.766 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:20.766 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:20.766 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:20.766 Initialization complete. Launching workers. 
00:10:20.766 Starting thread on core 2 00:10:20.766 Starting thread on core 3 00:10:20.766 Starting thread on core 1 00:10:20.766 16:06:21 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:20.766 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.766 [2024-04-24 16:06:21.954172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:24.057 [2024-04-24 16:06:25.012380] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:24.057 Initializing NVMe Controllers 00:10:24.057 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:24.057 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:24.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:24.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:24.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:24.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:24.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:24.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:24.057 Initialization complete. Launching workers. 00:10:24.057 Starting thread on core 1 with urgent priority queue 00:10:24.057 Starting thread on core 2 with urgent priority queue 00:10:24.057 Starting thread on core 3 with urgent priority queue 00:10:24.057 Starting thread on core 0 with urgent priority queue 00:10:24.057 SPDK bdev Controller (SPDK1 ) core 0: 4173.00 IO/s 23.96 secs/100000 ios 00:10:24.057 SPDK bdev Controller (SPDK1 ) core 1: 4386.00 IO/s 22.80 secs/100000 ios 00:10:24.057 SPDK bdev Controller (SPDK1 ) core 2: 4478.00 IO/s 22.33 secs/100000 ios 00:10:24.057 SPDK bdev Controller (SPDK1 ) core 3: 3837.67 IO/s 26.06 secs/100000 ios 00:10:24.057 ======================================================== 00:10:24.057 00:10:24.057 16:06:25 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:24.057 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.057 [2024-04-24 16:06:25.311234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:24.317 [2024-04-24 16:06:25.343708] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:24.317 Initializing NVMe Controllers 00:10:24.317 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:24.317 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:24.317 Namespace ID: 1 size: 0GB 00:10:24.317 Initialization complete. 00:10:24.317 INFO: using host memory buffer for IO 00:10:24.317 Hello world! 
00:10:24.317 16:06:25 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:24.317 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.577 [2024-04-24 16:06:25.619179] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:25.516 Initializing NVMe Controllers 00:10:25.516 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:25.516 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:25.516 Initialization complete. Launching workers. 00:10:25.516 submit (in ns) avg, min, max = 8902.7, 3527.8, 4016004.4 00:10:25.516 complete (in ns) avg, min, max = 24598.8, 2045.6, 4997756.7 00:10:25.516 00:10:25.516 Submit histogram 00:10:25.516 ================ 00:10:25.516 Range in us Cumulative Count 00:10:25.516 3.508 - 3.532: 0.0074% ( 1) 00:10:25.516 3.532 - 3.556: 0.5666% ( 76) 00:10:25.516 3.556 - 3.579: 4.1651% ( 489) 00:10:25.516 3.579 - 3.603: 12.8045% ( 1174) 00:10:25.516 3.603 - 3.627: 22.8862% ( 1370) 00:10:25.516 3.627 - 3.650: 34.7413% ( 1611) 00:10:25.516 3.650 - 3.674: 43.1305% ( 1140) 00:10:25.516 3.674 - 3.698: 49.9816% ( 931) 00:10:25.516 3.698 - 3.721: 56.4648% ( 881) 00:10:25.516 3.721 - 3.745: 62.0355% ( 757) 00:10:25.516 3.745 - 3.769: 66.9880% ( 673) 00:10:25.516 3.769 - 3.793: 70.1965% ( 436) 00:10:25.516 3.793 - 3.816: 72.9561% ( 375) 00:10:25.516 3.816 - 3.840: 76.0247% ( 417) 00:10:25.516 3.840 - 3.864: 79.9176% ( 529) 00:10:25.516 3.864 - 3.887: 83.5087% ( 488) 00:10:25.516 3.887 - 3.911: 86.1579% ( 360) 00:10:25.516 3.911 - 3.935: 88.2405% ( 283) 00:10:25.516 3.935 - 3.959: 89.9478% ( 232) 00:10:25.516 3.959 - 3.982: 91.5667% ( 220) 00:10:25.516 3.982 - 4.006: 92.9134% ( 183) 00:10:25.516 4.006 - 4.030: 93.7155% ( 109) 00:10:25.516 4.030 - 4.053: 94.4220% ( 96) 00:10:25.516 4.053 - 4.077: 95.2020% ( 106) 00:10:25.516 4.077 - 4.101: 95.8201% ( 84) 00:10:25.516 4.101 - 4.124: 96.2543% ( 59) 00:10:25.516 4.124 - 4.148: 96.4972% ( 33) 00:10:25.516 4.148 - 4.172: 96.6664% ( 23) 00:10:25.516 4.172 - 4.196: 96.7989% ( 18) 00:10:25.516 4.196 - 4.219: 96.8872% ( 12) 00:10:25.516 4.219 - 4.243: 96.9902% ( 14) 00:10:25.516 4.243 - 4.267: 97.0712% ( 11) 00:10:25.516 4.267 - 4.290: 97.1668% ( 13) 00:10:25.516 4.290 - 4.314: 97.2478% ( 11) 00:10:25.516 4.314 - 4.338: 97.3729% ( 17) 00:10:25.516 4.338 - 4.361: 97.4170% ( 6) 00:10:25.516 4.361 - 4.385: 97.4906% ( 10) 00:10:25.516 4.385 - 4.409: 97.5053% ( 2) 00:10:25.516 4.409 - 4.433: 97.5348% ( 4) 00:10:25.516 4.433 - 4.456: 97.5421% ( 1) 00:10:25.516 4.456 - 4.480: 97.5495% ( 1) 00:10:25.516 4.480 - 4.504: 97.5642% ( 2) 00:10:25.516 4.504 - 4.527: 97.5716% ( 1) 00:10:25.516 4.527 - 4.551: 97.5863% ( 2) 00:10:25.516 4.551 - 4.575: 97.6010% ( 2) 00:10:25.516 4.599 - 4.622: 97.6157% ( 2) 00:10:25.516 4.622 - 4.646: 97.6231% ( 1) 00:10:25.516 4.646 - 4.670: 97.6452% ( 3) 00:10:25.516 4.670 - 4.693: 97.6599% ( 2) 00:10:25.516 4.693 - 4.717: 97.6967% ( 5) 00:10:25.516 4.717 - 4.741: 97.7261% ( 4) 00:10:25.516 4.741 - 4.764: 97.7555% ( 4) 00:10:25.516 4.764 - 4.788: 97.7850% ( 4) 00:10:25.516 4.788 - 4.812: 97.7997% ( 2) 00:10:25.516 4.812 - 4.836: 97.8365% ( 5) 00:10:25.516 4.836 - 4.859: 97.9027% ( 9) 00:10:25.516 4.859 - 4.883: 97.9248% ( 3) 00:10:25.516 4.883 - 4.907: 97.9689% ( 6) 00:10:25.516 4.907 - 4.930: 97.9837% ( 2) 00:10:25.516 4.930 - 4.954: 98.0205% ( 5) 00:10:25.516 4.954 - 
4.978: 98.0352% ( 2) 00:10:25.516 4.978 - 5.001: 98.0425% ( 1) 00:10:25.516 5.001 - 5.025: 98.0646% ( 3) 00:10:25.516 5.025 - 5.049: 98.0793% ( 2) 00:10:25.516 5.049 - 5.073: 98.1014% ( 3) 00:10:25.516 5.073 - 5.096: 98.1456% ( 6) 00:10:25.516 5.096 - 5.120: 98.1676% ( 3) 00:10:25.516 5.120 - 5.144: 98.1971% ( 4) 00:10:25.516 5.144 - 5.167: 98.2191% ( 3) 00:10:25.516 5.191 - 5.215: 98.2339% ( 2) 00:10:25.516 5.215 - 5.239: 98.2486% ( 2) 00:10:25.516 5.262 - 5.286: 98.2559% ( 1) 00:10:25.516 5.286 - 5.310: 98.2633% ( 1) 00:10:25.516 5.310 - 5.333: 98.2707% ( 1) 00:10:25.516 5.333 - 5.357: 98.2854% ( 2) 00:10:25.516 5.404 - 5.428: 98.2927% ( 1) 00:10:25.516 5.452 - 5.476: 98.3001% ( 1) 00:10:25.516 5.523 - 5.547: 98.3148% ( 2) 00:10:25.516 5.547 - 5.570: 98.3222% ( 1) 00:10:25.516 5.618 - 5.641: 98.3295% ( 1) 00:10:25.516 5.713 - 5.736: 98.3442% ( 2) 00:10:25.516 5.807 - 5.831: 98.3516% ( 1) 00:10:25.517 5.831 - 5.855: 98.3590% ( 1) 00:10:25.517 5.855 - 5.879: 98.3663% ( 1) 00:10:25.517 5.879 - 5.902: 98.3810% ( 2) 00:10:25.517 5.902 - 5.926: 98.3958% ( 2) 00:10:25.517 5.950 - 5.973: 98.4031% ( 1) 00:10:25.517 5.973 - 5.997: 98.4105% ( 1) 00:10:25.517 5.997 - 6.021: 98.4178% ( 1) 00:10:25.517 6.068 - 6.116: 98.4252% ( 1) 00:10:25.517 6.116 - 6.163: 98.4326% ( 1) 00:10:25.517 6.163 - 6.210: 98.4473% ( 2) 00:10:25.517 6.305 - 6.353: 98.4546% ( 1) 00:10:25.517 6.353 - 6.400: 98.4620% ( 1) 00:10:25.517 6.447 - 6.495: 98.4694% ( 1) 00:10:25.517 6.495 - 6.542: 98.4767% ( 1) 00:10:25.517 6.542 - 6.590: 98.4841% ( 1) 00:10:25.517 6.732 - 6.779: 98.4988% ( 2) 00:10:25.517 6.827 - 6.874: 98.5135% ( 2) 00:10:25.517 6.969 - 7.016: 98.5209% ( 1) 00:10:25.517 7.064 - 7.111: 98.5356% ( 2) 00:10:25.517 7.111 - 7.159: 98.5503% ( 2) 00:10:25.517 7.159 - 7.206: 98.5797% ( 4) 00:10:25.517 7.206 - 7.253: 98.6018% ( 3) 00:10:25.517 7.253 - 7.301: 98.6092% ( 1) 00:10:25.517 7.301 - 7.348: 98.6239% ( 2) 00:10:25.517 7.396 - 7.443: 98.6312% ( 1) 00:10:25.517 7.443 - 7.490: 98.6386% ( 1) 00:10:25.517 7.585 - 7.633: 98.6533% ( 2) 00:10:25.517 7.633 - 7.680: 98.6680% ( 2) 00:10:25.517 7.680 - 7.727: 98.6754% ( 1) 00:10:25.517 7.727 - 7.775: 98.6828% ( 1) 00:10:25.517 7.775 - 7.822: 98.6901% ( 1) 00:10:25.517 7.822 - 7.870: 98.6975% ( 1) 00:10:25.517 7.870 - 7.917: 98.7048% ( 1) 00:10:25.517 7.917 - 7.964: 98.7269% ( 3) 00:10:25.517 7.964 - 8.012: 98.7343% ( 1) 00:10:25.517 8.107 - 8.154: 98.7416% ( 1) 00:10:25.517 8.154 - 8.201: 98.7563% ( 2) 00:10:25.517 8.249 - 8.296: 98.7637% ( 1) 00:10:25.517 8.344 - 8.391: 98.7711% ( 1) 00:10:25.517 8.391 - 8.439: 98.7784% ( 1) 00:10:25.517 8.439 - 8.486: 98.7858% ( 1) 00:10:25.517 8.581 - 8.628: 98.7931% ( 1) 00:10:25.517 8.628 - 8.676: 98.8005% ( 1) 00:10:25.517 8.723 - 8.770: 98.8079% ( 1) 00:10:25.517 8.770 - 8.818: 98.8152% ( 1) 00:10:25.517 8.865 - 8.913: 98.8299% ( 2) 00:10:25.517 8.913 - 8.960: 98.8447% ( 2) 00:10:25.517 9.007 - 9.055: 98.8520% ( 1) 00:10:25.517 9.102 - 9.150: 98.8594% ( 1) 00:10:25.517 9.434 - 9.481: 98.8667% ( 1) 00:10:25.517 9.529 - 9.576: 98.8741% ( 1) 00:10:25.517 9.576 - 9.624: 98.8814% ( 1) 00:10:25.517 9.766 - 9.813: 98.8888% ( 1) 00:10:25.517 9.813 - 9.861: 98.8962% ( 1) 00:10:25.517 9.861 - 9.908: 98.9035% ( 1) 00:10:25.517 10.477 - 10.524: 98.9109% ( 1) 00:10:25.517 10.667 - 10.714: 98.9182% ( 1) 00:10:25.517 10.714 - 10.761: 98.9256% ( 1) 00:10:25.517 10.856 - 10.904: 98.9330% ( 1) 00:10:25.517 11.093 - 11.141: 98.9403% ( 1) 00:10:25.517 11.330 - 11.378: 98.9477% ( 1) 00:10:25.517 11.757 - 11.804: 98.9550% ( 1) 00:10:25.517 12.516 - 12.610: 
98.9698% ( 2) 00:10:25.517 12.800 - 12.895: 98.9771% ( 1) 00:10:25.517 12.990 - 13.084: 98.9845% ( 1) 00:10:25.517 13.274 - 13.369: 99.0065% ( 3) 00:10:25.517 13.464 - 13.559: 99.0213% ( 2) 00:10:25.517 13.653 - 13.748: 99.0286% ( 1) 00:10:25.517 13.938 - 14.033: 99.0360% ( 1) 00:10:25.517 14.222 - 14.317: 99.0433% ( 1) 00:10:25.517 15.265 - 15.360: 99.0507% ( 1) 00:10:25.517 16.024 - 16.119: 99.0581% ( 1) 00:10:25.517 17.067 - 17.161: 99.0801% ( 3) 00:10:25.517 17.161 - 17.256: 99.0875% ( 1) 00:10:25.517 17.446 - 17.541: 99.1096% ( 3) 00:10:25.517 17.541 - 17.636: 99.1317% ( 3) 00:10:25.517 17.636 - 17.730: 99.1979% ( 9) 00:10:25.517 17.730 - 17.825: 99.2420% ( 6) 00:10:25.517 17.825 - 17.920: 99.2788% ( 5) 00:10:25.517 17.920 - 18.015: 99.3671% ( 12) 00:10:25.517 18.015 - 18.110: 99.4113% ( 6) 00:10:25.517 18.110 - 18.204: 99.4702% ( 8) 00:10:25.517 18.204 - 18.299: 99.4922% ( 3) 00:10:25.517 18.299 - 18.394: 99.5437% ( 7) 00:10:25.517 18.394 - 18.489: 99.5732% ( 4) 00:10:25.517 18.489 - 18.584: 99.6541% ( 11) 00:10:25.517 18.584 - 18.679: 99.6836% ( 4) 00:10:25.517 18.679 - 18.773: 99.6909% ( 1) 00:10:25.517 18.773 - 18.868: 99.7204% ( 4) 00:10:25.517 18.868 - 18.963: 99.7572% ( 5) 00:10:25.517 18.963 - 19.058: 99.7645% ( 1) 00:10:25.517 19.058 - 19.153: 99.7792% ( 2) 00:10:25.517 19.153 - 19.247: 99.7866% ( 1) 00:10:25.517 19.247 - 19.342: 99.7940% ( 1) 00:10:25.517 19.342 - 19.437: 99.8160% ( 3) 00:10:25.517 19.532 - 19.627: 99.8234% ( 1) 00:10:25.517 19.627 - 19.721: 99.8455% ( 3) 00:10:25.517 19.721 - 19.816: 99.8528% ( 1) 00:10:25.517 19.816 - 19.911: 99.8602% ( 1) 00:10:25.517 20.670 - 20.764: 99.8675% ( 1) 00:10:25.517 24.083 - 24.178: 99.8749% ( 1) 00:10:25.517 3980.705 - 4004.978: 99.9706% ( 13) 00:10:25.517 4004.978 - 4029.250: 100.0000% ( 4) 00:10:25.517 00:10:25.517 Complete histogram 00:10:25.517 ================== 00:10:25.517 Range in us Cumulative Count 00:10:25.517 2.039 - 2.050: 0.2134% ( 29) 00:10:25.517 2.050 - 2.062: 7.5649% ( 999) 00:10:25.517 2.062 - 2.074: 13.3858% ( 791) 00:10:25.517 2.074 - 2.086: 20.0603% ( 907) 00:10:25.517 2.086 - 2.098: 50.8132% ( 4179) 00:10:25.517 2.098 - 2.110: 60.3797% ( 1300) 00:10:25.517 2.110 - 2.121: 63.2129% ( 385) 00:10:25.517 2.121 - 2.133: 67.7533% ( 617) 00:10:25.517 2.133 - 2.145: 69.0338% ( 174) 00:10:25.517 2.145 - 2.157: 74.4646% ( 738) 00:10:25.517 2.157 - 2.169: 86.3640% ( 1617) 00:10:25.517 2.169 - 2.181: 89.3517% ( 406) 00:10:25.517 2.181 - 2.193: 90.3231% ( 132) 00:10:25.517 2.193 - 2.204: 91.2871% ( 131) 00:10:25.517 2.204 - 2.216: 91.9641% ( 92) 00:10:25.517 2.216 - 2.228: 92.6337% ( 91) 00:10:25.517 2.228 - 2.240: 94.3337% ( 231) 00:10:25.517 2.240 - 2.252: 95.2094% ( 119) 00:10:25.517 2.252 - 2.264: 95.4448% ( 32) 00:10:25.517 2.264 - 2.276: 95.6435% ( 27) 00:10:25.517 2.276 - 2.287: 95.7760% ( 18) 00:10:25.517 2.287 - 2.299: 95.8422% ( 9) 00:10:25.517 2.299 - 2.311: 95.9305% ( 12) 00:10:25.517 2.311 - 2.323: 96.0924% ( 22) 00:10:25.517 2.323 - 2.335: 96.1587% ( 9) 00:10:25.517 2.335 - 2.347: 96.2911% ( 18) 00:10:25.517 2.347 - 2.359: 96.4677% ( 24) 00:10:25.517 2.359 - 2.370: 96.6664% ( 27) 00:10:25.517 2.370 - 2.382: 96.9240% ( 35) 00:10:25.517 2.382 - 2.394: 97.2993% ( 51) 00:10:25.517 2.394 - 2.406: 97.6231% ( 44) 00:10:25.517 2.406 - 2.418: 97.7850% ( 22) 00:10:25.517 2.418 - 2.430: 97.9763% ( 26) 00:10:25.517 2.430 - 2.441: 98.1161% ( 19) 00:10:25.517 2.441 - 2.453: 98.2412% ( 17) 00:10:25.517 2.453 - 2.465: 98.3222% ( 11) 00:10:25.517 2.465 - 2.477: 98.3295% ( 1) 00:10:25.517 2.477 - 2.489: 98.3442% ( 2) 
00:10:25.517 2.489 - 2.501: 98.4031% ( 8) 00:10:25.517 2.501 - 2.513: 98.4399% ( 5) 00:10:25.517 2.513 - 2.524: 98.4546% ( 2) 00:10:25.517 2.524 - 2.536: 98.4620% ( 1) 00:10:25.517 2.536 - 2.548: 98.4694% ( 1) 00:10:25.517 2.548 - 2.560: 98.4841% ( 2) 00:10:25.517 2.584 - 2.596: 98.4914% ( 1) 00:10:25.517 2.607 - 2.619: 98.4988% ( 1) 00:10:25.517 2.619 - 2.631: 98.5135% ( 2) 00:10:25.517 2.631 - 2.643: 98.5209% ( 1) 00:10:25.517 2.667 - 2.679: 98.5282% ( 1) 00:10:25.517 2.679 - 2.690: 98.5429% ( 2) 00:10:25.517 2.702 - 2.714: 98.5503% ( 1) 00:10:25.517 2.714 - 2.726: 98.5577% ( 1) 00:10:25.517 2.726 - 2.738: 98.5650% ( 1) 00:10:25.517 2.797 - 2.809: 98.5724% ( 1) 00:10:25.517 2.821 - 2.833: 98.5797% ( 1) 00:10:25.517 2.868 - 2.880: 98.5871% ( 1) 00:10:25.517 3.058 - 3.081: 98.5945% ( 1) 00:10:25.517 3.247 - 3.271: 98.6018% ( 1) 00:10:25.517 3.319 - 3.342: 98.6239% ( 3) 00:10:25.517 3.342 - 3.366: 98.6386% ( 2) 00:10:25.517 3.366 - 3.390: 98.6533% ( 2) 00:10:25.517 3.413 - 3.437: 98.6607% ( 1) 00:10:25.517 3.461 - 3.484: 98.6680% ( 1) 00:10:25.517 3.484 - 3.508: 98.6828% ( 2) 00:10:25.517 3.556 - 3.579: 98.6901% ( 1) 00:10:25.517 3.579 - 3.603: 98.7122% ( 3) 00:10:25.517 [2024-04-24 16:06:26.641233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:25.517 3.603 - 3.627: 98.7269% ( 2) 00:10:25.517 3.627 - 3.650: 98.7343% ( 1) 00:10:25.517 3.650 - 3.674: 98.7416% ( 1) 00:10:25.517 3.793 - 3.816: 98.7490% ( 1) 00:10:25.517 3.816 - 3.840: 98.7563% ( 1) 00:10:25.517 3.911 - 3.935: 98.7637% ( 1) 00:10:25.517 4.907 - 4.930: 98.7784% ( 2) 00:10:25.517 5.049 - 5.073: 98.7858% ( 1) 00:10:25.517 5.120 - 5.144: 98.7931% ( 1) 00:10:25.517 5.310 - 5.333: 98.8005% ( 1) 00:10:25.517 5.381 - 5.404: 98.8079% ( 1) 00:10:25.517 5.784 - 5.807: 98.8152% ( 1) 00:10:25.517 5.807 - 5.831: 98.8226% ( 1) 00:10:25.517 5.879 - 5.902: 98.8299% ( 1) 00:10:25.517 5.902 - 5.926: 98.8373% ( 1) 00:10:25.517 5.973 - 5.997: 98.8447% ( 1) 00:10:25.517 5.997 - 6.021: 98.8594% ( 2) 00:10:25.517 6.068 - 6.116: 98.8667% ( 1) 00:10:25.517 6.305 - 6.353: 98.8741% ( 1) 00:10:25.517 6.353 - 6.400: 98.8814% ( 1) 00:10:25.517 6.400 - 6.447: 98.9035% ( 3) 00:10:25.517 6.827 - 6.874: 98.9109% ( 1) 00:10:25.517 6.969 - 7.016: 98.9182% ( 1) 00:10:25.517 7.064 - 7.111: 98.9256% ( 1) 00:10:25.517 7.159 - 7.206: 98.9330% ( 1) 00:10:25.517 7.206 - 7.253: 98.9403% ( 1) 00:10:25.517 7.443 - 7.490: 98.9477% ( 1) 00:10:25.517 7.585 - 7.633: 98.9550% ( 1) 00:10:25.517 10.856 - 10.904: 98.9624% ( 1) 00:10:25.517 15.455 - 15.550: 98.9698% ( 1) 00:10:25.517 15.550 - 15.644: 98.9771% ( 1) 00:10:25.517 15.739 - 15.834: 99.0139% ( 5) 00:10:25.517 15.834 - 15.929: 99.0213% ( 1) 00:10:25.517 15.929 - 16.024: 99.0507% ( 4) 00:10:25.517 16.024 - 16.119: 99.0654% ( 2) 00:10:25.517 16.119 - 16.213: 99.0875% ( 3) 00:10:25.517 16.213 - 16.308: 99.1464% ( 8) 00:10:25.517 16.308 - 16.403: 99.1537% ( 1) 00:10:25.517 16.403 - 16.498: 99.1611% ( 1) 00:10:25.517 16.498 - 16.593: 99.1832% ( 3) 00:10:25.517 16.593 - 16.687: 99.1979% ( 2) 00:10:25.517 16.687 - 16.782: 99.2273% ( 4) 00:10:25.517 16.782 - 16.877: 99.3156% ( 12) 00:10:25.517 16.877 - 16.972: 99.3303% ( 2) 00:10:25.517 16.972 - 17.067: 99.3451% ( 2) 00:10:25.517 17.067 - 17.161: 99.3524% ( 1) 00:10:25.517 17.161 - 17.256: 99.3598% ( 1) 00:10:25.517 17.256 - 17.351: 99.3819% ( 3) 00:10:25.517 17.446 - 17.541: 99.3892% ( 1) 00:10:25.517 17.541 - 17.636: 99.3966% ( 1) 00:10:25.517 17.636 - 17.730: 99.4039% ( 1) 00:10:25.517 17.730 - 17.825: 99.4113% ( 1)
00:10:25.518 17.920 - 18.015: 99.4186% ( 1) 00:10:25.518 18.868 - 18.963: 99.4260% ( 1) 00:10:25.518 19.342 - 19.437: 99.4334% ( 1) 00:10:25.518 1152.948 - 1159.016: 99.4407% ( 1) 00:10:25.518 2160.261 - 2172.397: 99.4481% ( 1) 00:10:25.518 3980.705 - 4004.978: 99.8675% ( 57) 00:10:25.518 4004.978 - 4029.250: 99.9926% ( 17) 00:10:25.518 4975.881 - 5000.154: 100.0000% ( 1) 00:10:25.518 00:10:25.518 16:06:26 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:25.518 16:06:26 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:25.518 16:06:26 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:25.518 16:06:26 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:25.518 16:06:26 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:25.776 [2024-04-24 16:06:26.904408] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:10:25.776 [ 00:10:25.776 { 00:10:25.776 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:25.776 "subtype": "Discovery", 00:10:25.776 "listen_addresses": [], 00:10:25.776 "allow_any_host": true, 00:10:25.776 "hosts": [] 00:10:25.776 }, 00:10:25.776 { 00:10:25.776 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:25.776 "subtype": "NVMe", 00:10:25.776 "listen_addresses": [ 00:10:25.776 { 00:10:25.776 "transport": "VFIOUSER", 00:10:25.776 "trtype": "VFIOUSER", 00:10:25.776 "adrfam": "IPv4", 00:10:25.776 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:25.776 "trsvcid": "0" 00:10:25.776 } 00:10:25.776 ], 00:10:25.776 "allow_any_host": true, 00:10:25.776 "hosts": [], 00:10:25.776 "serial_number": "SPDK1", 00:10:25.776 "model_number": "SPDK bdev Controller", 00:10:25.776 "max_namespaces": 32, 00:10:25.776 "min_cntlid": 1, 00:10:25.776 "max_cntlid": 65519, 00:10:25.776 "namespaces": [ 00:10:25.776 { 00:10:25.776 "nsid": 1, 00:10:25.776 "bdev_name": "Malloc1", 00:10:25.776 "name": "Malloc1", 00:10:25.776 "nguid": "A308981BFCAD472AA27AB3D8CCC7D66E", 00:10:25.776 "uuid": "a308981b-fcad-472a-a27a-b3d8ccc7d66e" 00:10:25.776 } 00:10:25.776 ] 00:10:25.776 }, 00:10:25.776 { 00:10:25.776 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:25.776 "subtype": "NVMe", 00:10:25.776 "listen_addresses": [ 00:10:25.776 { 00:10:25.776 "transport": "VFIOUSER", 00:10:25.776 "trtype": "VFIOUSER", 00:10:25.776 "adrfam": "IPv4", 00:10:25.776 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:25.776 "trsvcid": "0" 00:10:25.776 } 00:10:25.776 ], 00:10:25.776 "allow_any_host": true, 00:10:25.776 "hosts": [], 00:10:25.776 "serial_number": "SPDK2", 00:10:25.776 "model_number": "SPDK bdev Controller", 00:10:25.776 "max_namespaces": 32, 00:10:25.776 "min_cntlid": 1, 00:10:25.776 "max_cntlid": 65519, 00:10:25.776 "namespaces": [ 00:10:25.776 { 00:10:25.776 "nsid": 1, 00:10:25.776 "bdev_name": "Malloc2", 00:10:25.776 "name": "Malloc2", 00:10:25.776 "nguid": "A61AC55A39A54D53BA0059CEFAC7D2A0", 00:10:25.776 "uuid": "a61ac55a-39a5-4d53-ba00-59cefac7d2a0" 00:10:25.776 } 00:10:25.776 ] 00:10:25.776 } 00:10:25.776 ] 00:10:25.776 16:06:26 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:25.776 16:06:26 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3351977 00:10:25.776 16:06:26 -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:25.776 16:06:26 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:25.776 16:06:26 -- common/autotest_common.sh@1251 -- # local i=0 00:10:25.776 16:06:26 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:25.776 16:06:26 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:25.776 16:06:26 -- common/autotest_common.sh@1262 -- # return 0 00:10:25.776 16:06:26 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:25.776 16:06:26 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:25.776 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.033 [2024-04-24 16:06:27.078185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:26.033 Malloc3 00:10:26.033 16:06:27 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:26.290 [2024-04-24 16:06:27.453843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:26.290 16:06:27 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:26.290 Asynchronous Event Request test 00:10:26.290 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:26.290 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:26.290 Registering asynchronous event callbacks... 00:10:26.290 Starting namespace attribute notice tests for all controllers... 00:10:26.291 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:26.291 aer_cb - Changed Namespace 00:10:26.291 Cleaning up... 
00:10:26.549 [ 00:10:26.549 { 00:10:26.549 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:26.549 "subtype": "Discovery", 00:10:26.549 "listen_addresses": [], 00:10:26.549 "allow_any_host": true, 00:10:26.549 "hosts": [] 00:10:26.549 }, 00:10:26.549 { 00:10:26.549 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:26.549 "subtype": "NVMe", 00:10:26.549 "listen_addresses": [ 00:10:26.549 { 00:10:26.549 "transport": "VFIOUSER", 00:10:26.549 "trtype": "VFIOUSER", 00:10:26.549 "adrfam": "IPv4", 00:10:26.549 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:26.549 "trsvcid": "0" 00:10:26.549 } 00:10:26.549 ], 00:10:26.549 "allow_any_host": true, 00:10:26.549 "hosts": [], 00:10:26.549 "serial_number": "SPDK1", 00:10:26.549 "model_number": "SPDK bdev Controller", 00:10:26.549 "max_namespaces": 32, 00:10:26.549 "min_cntlid": 1, 00:10:26.549 "max_cntlid": 65519, 00:10:26.549 "namespaces": [ 00:10:26.549 { 00:10:26.549 "nsid": 1, 00:10:26.549 "bdev_name": "Malloc1", 00:10:26.549 "name": "Malloc1", 00:10:26.549 "nguid": "A308981BFCAD472AA27AB3D8CCC7D66E", 00:10:26.549 "uuid": "a308981b-fcad-472a-a27a-b3d8ccc7d66e" 00:10:26.549 }, 00:10:26.549 { 00:10:26.549 "nsid": 2, 00:10:26.549 "bdev_name": "Malloc3", 00:10:26.549 "name": "Malloc3", 00:10:26.549 "nguid": "88337F384BD9467FBE6A0D816B818147", 00:10:26.549 "uuid": "88337f38-4bd9-467f-be6a-0d816b818147" 00:10:26.549 } 00:10:26.549 ] 00:10:26.549 }, 00:10:26.549 { 00:10:26.549 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:26.549 "subtype": "NVMe", 00:10:26.549 "listen_addresses": [ 00:10:26.549 { 00:10:26.549 "transport": "VFIOUSER", 00:10:26.549 "trtype": "VFIOUSER", 00:10:26.549 "adrfam": "IPv4", 00:10:26.549 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:26.549 "trsvcid": "0" 00:10:26.549 } 00:10:26.549 ], 00:10:26.549 "allow_any_host": true, 00:10:26.549 "hosts": [], 00:10:26.549 "serial_number": "SPDK2", 00:10:26.549 "model_number": "SPDK bdev Controller", 00:10:26.549 "max_namespaces": 32, 00:10:26.549 "min_cntlid": 1, 00:10:26.549 "max_cntlid": 65519, 00:10:26.549 "namespaces": [ 00:10:26.549 { 00:10:26.549 "nsid": 1, 00:10:26.549 "bdev_name": "Malloc2", 00:10:26.549 "name": "Malloc2", 00:10:26.549 "nguid": "A61AC55A39A54D53BA0059CEFAC7D2A0", 00:10:26.549 "uuid": "a61ac55a-39a5-4d53-ba00-59cefac7d2a0" 00:10:26.549 } 00:10:26.549 ] 00:10:26.549 } 00:10:26.549 ] 00:10:26.549 16:06:27 -- target/nvmf_vfio_user.sh@44 -- # wait 3351977 00:10:26.549 16:06:27 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:26.549 16:06:27 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:26.549 16:06:27 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:26.549 16:06:27 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:26.549 [2024-04-24 16:06:27.727503] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:10:26.549 [2024-04-24 16:06:27.727550] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352109 ] 00:10:26.549 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.549 [2024-04-24 16:06:27.759825] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:26.549 [2024-04-24 16:06:27.764127] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:26.549 [2024-04-24 16:06:27.764156] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc338ef0000 00:10:26.549 [2024-04-24 16:06:27.765133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:26.549 [2024-04-24 16:06:27.766135] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:26.549 [2024-04-24 16:06:27.767142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:26.549 [2024-04-24 16:06:27.768150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:26.549 [2024-04-24 16:06:27.769156] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:26.550 [2024-04-24 16:06:27.770164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:26.550 [2024-04-24 16:06:27.771167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:26.550 [2024-04-24 16:06:27.772173] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:26.550 [2024-04-24 16:06:27.773188] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:26.550 [2024-04-24 16:06:27.773214] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc338ee5000 00:10:26.550 [2024-04-24 16:06:27.774324] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:26.550 [2024-04-24 16:06:27.793000] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:26.550 [2024-04-24 16:06:27.793033] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:26.550 [2024-04-24 16:06:27.795101] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:26.550 [2024-04-24 16:06:27.795156] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:26.550 [2024-04-24 16:06:27.795244] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:10:26.550 [2024-04-24 16:06:27.795269] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:26.550 [2024-04-24 16:06:27.795278] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:26.550 [2024-04-24 16:06:27.796104] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:26.550 [2024-04-24 16:06:27.796124] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:26.550 [2024-04-24 16:06:27.796136] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:26.550 [2024-04-24 16:06:27.797104] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:26.550 [2024-04-24 16:06:27.797124] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:26.550 [2024-04-24 16:06:27.797138] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:26.550 [2024-04-24 16:06:27.798109] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:26.550 [2024-04-24 16:06:27.798129] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:26.550 [2024-04-24 16:06:27.799119] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:26.550 [2024-04-24 16:06:27.799139] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:26.550 [2024-04-24 16:06:27.799148] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:26.550 [2024-04-24 16:06:27.799160] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:26.550 [2024-04-24 16:06:27.799269] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:26.550 [2024-04-24 16:06:27.799277] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:26.550 [2024-04-24 16:06:27.799285] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:26.550 [2024-04-24 16:06:27.800122] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:26.550 [2024-04-24 16:06:27.801125] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:26.550 [2024-04-24 16:06:27.802138] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:26.550 [2024-04-24 16:06:27.803132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:26.550 [2024-04-24 16:06:27.803210] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:26.550 [2024-04-24 16:06:27.804147] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:26.550 [2024-04-24 16:06:27.804166] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:26.550 [2024-04-24 16:06:27.804178] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:26.550 [2024-04-24 16:06:27.804203] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:26.550 [2024-04-24 16:06:27.804215] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:26.550 [2024-04-24 16:06:27.804236] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:26.550 [2024-04-24 16:06:27.804245] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:26.550 [2024-04-24 16:06:27.804262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:26.550 [2024-04-24 16:06:27.810756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:26.550 [2024-04-24 16:06:27.810778] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:26.550 [2024-04-24 16:06:27.810787] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:26.550 [2024-04-24 16:06:27.810795] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:26.550 [2024-04-24 16:06:27.810802] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:26.550 [2024-04-24 16:06:27.810810] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:26.550 [2024-04-24 16:06:27.810817] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:26.550 [2024-04-24 16:06:27.810825] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:26.550 [2024-04-24 16:06:27.810837] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:26.550 [2024-04-24 16:06:27.810852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:26.550 [2024-04-24 16:06:27.818753] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:26.550 [2024-04-24 16:06:27.818788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.551 [2024-04-24 16:06:27.818802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.551 [2024-04-24 16:06:27.818815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.551 [2024-04-24 16:06:27.818826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.551 [2024-04-24 16:06:27.818835] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:26.551 [2024-04-24 16:06:27.818850] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:26.551 [2024-04-24 16:06:27.818865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:26.551 [2024-04-24 16:06:27.826751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:26.551 [2024-04-24 16:06:27.826783] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:26.551 [2024-04-24 16:06:27.826794] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:26.551 [2024-04-24 16:06:27.826810] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:26.551 [2024-04-24 16:06:27.826821] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:26.551 [2024-04-24 16:06:27.826835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:26.809 [2024-04-24 16:06:27.834755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:26.809 [2024-04-24 16:06:27.834814] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:26.809 [2024-04-24 16:06:27.834830] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:26.809 [2024-04-24 16:06:27.834843] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:26.809 [2024-04-24 16:06:27.834852] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:26.809 [2024-04-24 16:06:27.834862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:26.809 
[2024-04-24 16:06:27.842752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:26.809 [2024-04-24 16:06:27.842774] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:26.809 [2024-04-24 16:06:27.842793] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:26.809 [2024-04-24 16:06:27.842808] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:26.809 [2024-04-24 16:06:27.842821] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:26.809 [2024-04-24 16:06:27.842829] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:26.809 [2024-04-24 16:06:27.842839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:26.809 [2024-04-24 16:06:27.850754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:26.809 [2024-04-24 16:06:27.850781] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:26.809 [2024-04-24 16:06:27.850796] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:26.809 [2024-04-24 16:06:27.850810] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:26.809 [2024-04-24 16:06:27.850818] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:26.809 [2024-04-24 16:06:27.850828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:26.809 [2024-04-24 16:06:27.858753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:26.810 [2024-04-24 16:06:27.858775] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:26.810 [2024-04-24 16:06:27.858792] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:26.810 [2024-04-24 16:06:27.858807] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:26.810 [2024-04-24 16:06:27.858817] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:26.810 [2024-04-24 16:06:27.858825] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:26.810 [2024-04-24 16:06:27.858833] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:26.810 [2024-04-24 16:06:27.858841] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:26.810 [2024-04-24 16:06:27.858849] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:26.810 [2024-04-24 16:06:27.858872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:26.810 [2024-04-24 16:06:27.866766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:26.810 [2024-04-24 16:06:27.866792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:26.810 [2024-04-24 16:06:27.874754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:26.810 [2024-04-24 16:06:27.874778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:26.810 [2024-04-24 16:06:27.882753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:26.810 [2024-04-24 16:06:27.882777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:26.810 [2024-04-24 16:06:27.890754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:26.810 [2024-04-24 16:06:27.890779] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:26.810 [2024-04-24 16:06:27.890789] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:26.810 [2024-04-24 16:06:27.890796] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:26.810 [2024-04-24 16:06:27.890802] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:26.810 [2024-04-24 16:06:27.890812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:26.810 [2024-04-24 16:06:27.890823] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:26.810 [2024-04-24 16:06:27.890832] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:26.810 [2024-04-24 16:06:27.890841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:26.810 [2024-04-24 16:06:27.890852] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:26.810 [2024-04-24 16:06:27.890860] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:26.810 [2024-04-24 16:06:27.890869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:26.810 [2024-04-24 16:06:27.890885] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:26.810 [2024-04-24 16:06:27.890894] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:26.810 [2024-04-24 16:06:27.890903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:26.810 [2024-04-24 16:06:27.898757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:26.810 [2024-04-24 16:06:27.898787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:26.810 [2024-04-24 16:06:27.898803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:26.810 [2024-04-24 16:06:27.898816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:26.810 ===================================================== 00:10:26.810 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:26.810 ===================================================== 00:10:26.810 Controller Capabilities/Features 00:10:26.810 ================================ 00:10:26.810 Vendor ID: 4e58 00:10:26.810 Subsystem Vendor ID: 4e58 00:10:26.810 Serial Number: SPDK2 00:10:26.810 Model Number: SPDK bdev Controller 00:10:26.810 Firmware Version: 24.05 00:10:26.810 Recommended Arb Burst: 6 00:10:26.810 IEEE OUI Identifier: 8d 6b 50 00:10:26.810 Multi-path I/O 00:10:26.810 May have multiple subsystem ports: Yes 00:10:26.810 May have multiple controllers: Yes 00:10:26.810 Associated with SR-IOV VF: No 00:10:26.810 Max Data Transfer Size: 131072 00:10:26.810 Max Number of Namespaces: 32 00:10:26.810 Max Number of I/O Queues: 127 00:10:26.810 NVMe Specification Version (VS): 1.3 00:10:26.810 NVMe Specification Version (Identify): 1.3 00:10:26.810 Maximum Queue Entries: 256 00:10:26.810 Contiguous Queues Required: Yes 00:10:26.810 Arbitration Mechanisms Supported 00:10:26.810 Weighted Round Robin: Not Supported 00:10:26.810 Vendor Specific: Not Supported 00:10:26.810 Reset Timeout: 15000 ms 00:10:26.810 Doorbell Stride: 4 bytes 00:10:26.810 NVM Subsystem Reset: Not Supported 00:10:26.810 Command Sets Supported 00:10:26.810 NVM Command Set: Supported 00:10:26.810 Boot Partition: Not Supported 00:10:26.810 Memory Page Size Minimum: 4096 bytes 00:10:26.810 Memory Page Size Maximum: 4096 bytes 00:10:26.810 Persistent Memory Region: Not Supported 00:10:26.810 Optional Asynchronous Events Supported 00:10:26.810 Namespace Attribute Notices: Supported 00:10:26.810 Firmware Activation Notices: Not Supported 00:10:26.810 ANA Change Notices: Not Supported 00:10:26.810 PLE Aggregate Log Change Notices: Not Supported 00:10:26.810 LBA Status Info Alert Notices: Not Supported 00:10:26.810 EGE Aggregate Log Change Notices: Not Supported 00:10:26.810 Normal NVM Subsystem Shutdown event: Not Supported 00:10:26.810 Zone Descriptor Change Notices: Not Supported 00:10:26.810 Discovery Log Change Notices: Not Supported 00:10:26.810 Controller Attributes 00:10:26.810 128-bit Host Identifier: Supported 00:10:26.810 Non-Operational Permissive Mode: Not Supported 00:10:26.810 NVM Sets: Not Supported 00:10:26.810 Read Recovery Levels: Not Supported 00:10:26.810 Endurance Groups: Not Supported 00:10:26.810 Predictable Latency Mode: Not Supported 00:10:26.810 Traffic Based Keep ALive: Not Supported 00:10:26.810 Namespace Granularity: Not Supported 
00:10:26.810 SQ Associations: Not Supported 00:10:26.810 UUID List: Not Supported 00:10:26.810 Multi-Domain Subsystem: Not Supported 00:10:26.810 Fixed Capacity Management: Not Supported 00:10:26.810 Variable Capacity Management: Not Supported 00:10:26.810 Delete Endurance Group: Not Supported 00:10:26.810 Delete NVM Set: Not Supported 00:10:26.810 Extended LBA Formats Supported: Not Supported 00:10:26.810 Flexible Data Placement Supported: Not Supported 00:10:26.810 00:10:26.810 Controller Memory Buffer Support 00:10:26.810 ================================ 00:10:26.810 Supported: No 00:10:26.810 00:10:26.810 Persistent Memory Region Support 00:10:26.810 ================================ 00:10:26.810 Supported: No 00:10:26.810 00:10:26.810 Admin Command Set Attributes 00:10:26.810 ============================ 00:10:26.810 Security Send/Receive: Not Supported 00:10:26.810 Format NVM: Not Supported 00:10:26.810 Firmware Activate/Download: Not Supported 00:10:26.810 Namespace Management: Not Supported 00:10:26.810 Device Self-Test: Not Supported 00:10:26.810 Directives: Not Supported 00:10:26.810 NVMe-MI: Not Supported 00:10:26.810 Virtualization Management: Not Supported 00:10:26.810 Doorbell Buffer Config: Not Supported 00:10:26.810 Get LBA Status Capability: Not Supported 00:10:26.810 Command & Feature Lockdown Capability: Not Supported 00:10:26.810 Abort Command Limit: 4 00:10:26.810 Async Event Request Limit: 4 00:10:26.810 Number of Firmware Slots: N/A 00:10:26.810 Firmware Slot 1 Read-Only: N/A 00:10:26.810 Firmware Activation Without Reset: N/A 00:10:26.810 Multiple Update Detection Support: N/A 00:10:26.810 Firmware Update Granularity: No Information Provided 00:10:26.810 Per-Namespace SMART Log: No 00:10:26.810 Asymmetric Namespace Access Log Page: Not Supported 00:10:26.810 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:26.810 Command Effects Log Page: Supported 00:10:26.810 Get Log Page Extended Data: Supported 00:10:26.810 Telemetry Log Pages: Not Supported 00:10:26.810 Persistent Event Log Pages: Not Supported 00:10:26.810 Supported Log Pages Log Page: May Support 00:10:26.810 Commands Supported & Effects Log Page: Not Supported 00:10:26.810 Feature Identifiers & Effects Log Page:May Support 00:10:26.810 NVMe-MI Commands & Effects Log Page: May Support 00:10:26.810 Data Area 4 for Telemetry Log: Not Supported 00:10:26.810 Error Log Page Entries Supported: 128 00:10:26.810 Keep Alive: Supported 00:10:26.810 Keep Alive Granularity: 10000 ms 00:10:26.810 00:10:26.810 NVM Command Set Attributes 00:10:26.810 ========================== 00:10:26.810 Submission Queue Entry Size 00:10:26.810 Max: 64 00:10:26.810 Min: 64 00:10:26.811 Completion Queue Entry Size 00:10:26.811 Max: 16 00:10:26.811 Min: 16 00:10:26.811 Number of Namespaces: 32 00:10:26.811 Compare Command: Supported 00:10:26.811 Write Uncorrectable Command: Not Supported 00:10:26.811 Dataset Management Command: Supported 00:10:26.811 Write Zeroes Command: Supported 00:10:26.811 Set Features Save Field: Not Supported 00:10:26.811 Reservations: Not Supported 00:10:26.811 Timestamp: Not Supported 00:10:26.811 Copy: Supported 00:10:26.811 Volatile Write Cache: Present 00:10:26.811 Atomic Write Unit (Normal): 1 00:10:26.811 Atomic Write Unit (PFail): 1 00:10:26.811 Atomic Compare & Write Unit: 1 00:10:26.811 Fused Compare & Write: Supported 00:10:26.811 Scatter-Gather List 00:10:26.811 SGL Command Set: Supported (Dword aligned) 00:10:26.811 SGL Keyed: Not Supported 00:10:26.811 SGL Bit Bucket Descriptor: Not Supported 00:10:26.811 
SGL Metadata Pointer: Not Supported 00:10:26.811 Oversized SGL: Not Supported 00:10:26.811 SGL Metadata Address: Not Supported 00:10:26.811 SGL Offset: Not Supported 00:10:26.811 Transport SGL Data Block: Not Supported 00:10:26.811 Replay Protected Memory Block: Not Supported 00:10:26.811 00:10:26.811 Firmware Slot Information 00:10:26.811 ========================= 00:10:26.811 Active slot: 1 00:10:26.811 Slot 1 Firmware Revision: 24.05 00:10:26.811 00:10:26.811 00:10:26.811 Commands Supported and Effects 00:10:26.811 ============================== 00:10:26.811 Admin Commands 00:10:26.811 -------------- 00:10:26.811 Get Log Page (02h): Supported 00:10:26.811 Identify (06h): Supported 00:10:26.811 Abort (08h): Supported 00:10:26.811 Set Features (09h): Supported 00:10:26.811 Get Features (0Ah): Supported 00:10:26.811 Asynchronous Event Request (0Ch): Supported 00:10:26.811 Keep Alive (18h): Supported 00:10:26.811 I/O Commands 00:10:26.811 ------------ 00:10:26.811 Flush (00h): Supported LBA-Change 00:10:26.811 Write (01h): Supported LBA-Change 00:10:26.811 Read (02h): Supported 00:10:26.811 Compare (05h): Supported 00:10:26.811 Write Zeroes (08h): Supported LBA-Change 00:10:26.811 Dataset Management (09h): Supported LBA-Change 00:10:26.811 Copy (19h): Supported LBA-Change 00:10:26.811 Unknown (79h): Supported LBA-Change 00:10:26.811 Unknown (7Ah): Supported 00:10:26.811 00:10:26.811 Error Log 00:10:26.811 ========= 00:10:26.811 00:10:26.811 Arbitration 00:10:26.811 =========== 00:10:26.811 Arbitration Burst: 1 00:10:26.811 00:10:26.811 Power Management 00:10:26.811 ================ 00:10:26.811 Number of Power States: 1 00:10:26.811 Current Power State: Power State #0 00:10:26.811 Power State #0: 00:10:26.811 Max Power: 0.00 W 00:10:26.811 Non-Operational State: Operational 00:10:26.811 Entry Latency: Not Reported 00:10:26.811 Exit Latency: Not Reported 00:10:26.811 Relative Read Throughput: 0 00:10:26.811 Relative Read Latency: 0 00:10:26.811 Relative Write Throughput: 0 00:10:26.811 Relative Write Latency: 0 00:10:26.811 Idle Power: Not Reported 00:10:26.811 Active Power: Not Reported 00:10:26.811 Non-Operational Permissive Mode: Not Supported 00:10:26.811 00:10:26.811 Health Information 00:10:26.811 ================== 00:10:26.811 Critical Warnings: 00:10:26.811 Available Spare Space: OK 00:10:26.811 Temperature: OK 00:10:26.811 Device Reliability: OK 00:10:26.811 Read Only: No 00:10:26.811 Volatile Memory Backup: OK 00:10:26.811 Current Temperature: 0 Kelvin (-2[2024-04-24 16:06:27.898936] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:26.811 [2024-04-24 16:06:27.906750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:26.811 [2024-04-24 16:06:27.906792] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:26.811 [2024-04-24 16:06:27.906809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.811 [2024-04-24 16:06:27.906820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.811 [2024-04-24 16:06:27.906830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.811 [2024-04-24 16:06:27.906840] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.811 [2024-04-24 16:06:27.906925] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:26.811 [2024-04-24 16:06:27.906946] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:26.811 [2024-04-24 16:06:27.907924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:26.811 [2024-04-24 16:06:27.908007] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:26.811 [2024-04-24 16:06:27.908022] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:26.811 [2024-04-24 16:06:27.908940] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:26.811 [2024-04-24 16:06:27.908963] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:26.811 [2024-04-24 16:06:27.909015] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:26.811 [2024-04-24 16:06:27.910210] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:26.811 73 Celsius) 00:10:26.811 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:26.811 Available Spare: 0% 00:10:26.811 Available Spare Threshold: 0% 00:10:26.811 Life Percentage Used: 0% 00:10:26.811 Data Units Read: 0 00:10:26.811 Data Units Written: 0 00:10:26.811 Host Read Commands: 0 00:10:26.811 Host Write Commands: 0 00:10:26.811 Controller Busy Time: 0 minutes 00:10:26.811 Power Cycles: 0 00:10:26.811 Power On Hours: 0 hours 00:10:26.811 Unsafe Shutdowns: 0 00:10:26.811 Unrecoverable Media Errors: 0 00:10:26.811 Lifetime Error Log Entries: 0 00:10:26.811 Warning Temperature Time: 0 minutes 00:10:26.811 Critical Temperature Time: 0 minutes 00:10:26.811 00:10:26.811 Number of Queues 00:10:26.811 ================ 00:10:26.811 Number of I/O Submission Queues: 127 00:10:26.811 Number of I/O Completion Queues: 127 00:10:26.811 00:10:26.811 Active Namespaces 00:10:26.811 ================= 00:10:26.811 Namespace ID:1 00:10:26.811 Error Recovery Timeout: Unlimited 00:10:26.811 Command Set Identifier: NVM (00h) 00:10:26.811 Deallocate: Supported 00:10:26.811 Deallocated/Unwritten Error: Not Supported 00:10:26.811 Deallocated Read Value: Unknown 00:10:26.811 Deallocate in Write Zeroes: Not Supported 00:10:26.811 Deallocated Guard Field: 0xFFFF 00:10:26.811 Flush: Supported 00:10:26.811 Reservation: Supported 00:10:26.811 Namespace Sharing Capabilities: Multiple Controllers 00:10:26.811 Size (in LBAs): 131072 (0GiB) 00:10:26.811 Capacity (in LBAs): 131072 (0GiB) 00:10:26.811 Utilization (in LBAs): 131072 (0GiB) 00:10:26.811 NGUID: A61AC55A39A54D53BA0059CEFAC7D2A0 00:10:26.811 UUID: a61ac55a-39a5-4d53-ba00-59cefac7d2a0 00:10:26.811 Thin Provisioning: Not Supported 00:10:26.811 Per-NS Atomic Units: Yes 00:10:26.811 Atomic Boundary Size (Normal): 0 00:10:26.811 Atomic Boundary Size (PFail): 0 00:10:26.811 Atomic Boundary Offset: 0 00:10:26.811 Maximum Single Source Range Length: 65535 
00:10:26.811 Maximum Copy Length: 65535 00:10:26.811 Maximum Source Range Count: 1 00:10:26.811 NGUID/EUI64 Never Reused: No 00:10:26.811 Namespace Write Protected: No 00:10:26.811 Number of LBA Formats: 1 00:10:26.811 Current LBA Format: LBA Format #00 00:10:26.811 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:26.811 00:10:26.811 16:06:27 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:26.811 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.071 [2024-04-24 16:06:28.137581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:32.348 [2024-04-24 16:06:33.243078] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:32.348 Initializing NVMe Controllers 00:10:32.348 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:32.348 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:32.348 Initialization complete. Launching workers. 00:10:32.348 ======================================================== 00:10:32.349 Latency(us) 00:10:32.349 Device Information : IOPS MiB/s Average min max 00:10:32.349 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35221.94 137.59 3633.34 1178.22 7396.84 00:10:32.349 ======================================================== 00:10:32.349 Total : 35221.94 137.59 3633.34 1178.22 7396.84 00:10:32.349 00:10:32.349 16:06:33 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:32.349 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.349 [2024-04-24 16:06:33.476676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:37.619 [2024-04-24 16:06:38.494170] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:37.619 Initializing NVMe Controllers 00:10:37.619 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:37.619 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:37.619 Initialization complete. Launching workers. 
00:10:37.619 ======================================================== 00:10:37.619 Latency(us) 00:10:37.619 Device Information : IOPS MiB/s Average min max 00:10:37.619 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34275.38 133.89 3733.73 1171.67 10594.79 00:10:37.619 ======================================================== 00:10:37.619 Total : 34275.38 133.89 3733.73 1171.67 10594.79 00:10:37.619 00:10:37.619 16:06:38 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:37.619 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.619 [2024-04-24 16:06:38.704981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:42.893 [2024-04-24 16:06:43.843884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:42.893 Initializing NVMe Controllers 00:10:42.893 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:42.893 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:42.893 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:42.893 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:42.893 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:42.893 Initialization complete. Launching workers. 00:10:42.893 Starting thread on core 2 00:10:42.893 Starting thread on core 3 00:10:42.893 Starting thread on core 1 00:10:42.893 16:06:43 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:42.893 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.893 [2024-04-24 16:06:44.134177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:46.180 [2024-04-24 16:06:47.213921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:46.180 Initializing NVMe Controllers 00:10:46.180 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:46.180 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:46.180 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:46.180 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:46.180 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:46.180 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:46.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:46.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:46.180 Initialization complete. Launching workers. 
00:10:46.180 Starting thread on core 1 with urgent priority queue 00:10:46.181 Starting thread on core 2 with urgent priority queue 00:10:46.181 Starting thread on core 3 with urgent priority queue 00:10:46.181 Starting thread on core 0 with urgent priority queue 00:10:46.181 SPDK bdev Controller (SPDK2 ) core 0: 6291.67 IO/s 15.89 secs/100000 ios 00:10:46.181 SPDK bdev Controller (SPDK2 ) core 1: 6707.33 IO/s 14.91 secs/100000 ios 00:10:46.181 SPDK bdev Controller (SPDK2 ) core 2: 6194.67 IO/s 16.14 secs/100000 ios 00:10:46.181 SPDK bdev Controller (SPDK2 ) core 3: 6476.67 IO/s 15.44 secs/100000 ios 00:10:46.181 ======================================================== 00:10:46.181 00:10:46.181 16:06:47 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:46.181 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.441 [2024-04-24 16:06:47.514252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:46.441 [2024-04-24 16:06:47.526336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:46.441 Initializing NVMe Controllers 00:10:46.441 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:46.441 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:46.441 Namespace ID: 1 size: 0GB 00:10:46.441 Initialization complete. 00:10:46.441 INFO: using host memory buffer for IO 00:10:46.441 Hello world! 00:10:46.441 16:06:47 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:46.441 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.699 [2024-04-24 16:06:47.818543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:48.075 Initializing NVMe Controllers 00:10:48.075 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:48.075 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:48.075 Initialization complete. Launching workers. 
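The figures above are internally consistent and easy to sanity-check. For the two spdk_nvme_perf runs (queue depth 128, 4 KiB I/O, 5 s), Little's law gives average latency ≈ queue depth / IOPS, which reproduces the reported averages; for the arbitration run, the "secs/100000 ios" column is simply 100000 / IOPS. A small awk sketch of the arithmetic, illustrative only, with the numbers copied from the output above:

    # Little's law check on the perf runs: latency_us ~= qd / IOPS * 1e6
    awk 'BEGIN { printf "read:  %.1f us\n", 128 / 35221.94 * 1e6 }'   # ~3634 vs 3633.34 reported
    awk 'BEGIN { printf "write: %.1f us\n", 128 / 34275.38 * 1e6 }'   # ~3734 vs 3733.73 reported
    # Arbitration column: secs per 100000 ios = 100000 / IOPS
    awk 'BEGIN { printf "core 0: %.2f s\n", 100000 / 6291.67 }'       # 15.89 as reported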
00:10:48.075 submit (in ns) avg, min, max = 9651.7, 3506.7, 6993558.9 00:10:48.075 complete (in ns) avg, min, max = 25495.5, 2026.7, 4024452.2 00:10:48.075 00:10:48.075 Submit histogram 00:10:48.075 ================ 00:10:48.075 Range in us Cumulative Count 00:10:48.075 3.484 - 3.508: 0.0073% ( 1) 00:10:48.076 3.508 - 3.532: 0.9183% ( 125) 00:10:48.076 3.532 - 3.556: 4.6644% ( 514) 00:10:48.076 3.556 - 3.579: 14.4887% ( 1348) 00:10:48.076 3.579 - 3.603: 24.8233% ( 1418) 00:10:48.076 3.603 - 3.627: 36.2073% ( 1562) 00:10:48.076 3.627 - 3.650: 45.4413% ( 1267) 00:10:48.076 3.650 - 3.674: 52.3577% ( 949) 00:10:48.076 3.674 - 3.698: 57.3646% ( 687) 00:10:48.076 3.698 - 3.721: 61.4678% ( 563) 00:10:48.076 3.721 - 3.745: 64.8203% ( 460) 00:10:48.076 3.745 - 3.769: 67.5024% ( 368) 00:10:48.076 3.769 - 3.793: 70.4540% ( 405) 00:10:48.076 3.793 - 3.816: 74.0908% ( 499) 00:10:48.076 3.816 - 3.840: 78.6604% ( 627) 00:10:48.076 3.840 - 3.864: 83.0114% ( 597) 00:10:48.076 3.864 - 3.887: 85.6424% ( 361) 00:10:48.076 3.887 - 3.911: 87.4863% ( 253) 00:10:48.076 3.911 - 3.935: 89.2355% ( 240) 00:10:48.076 3.935 - 3.959: 90.8024% ( 215) 00:10:48.076 3.959 - 3.982: 91.9467% ( 157) 00:10:48.076 3.982 - 4.006: 92.8212% ( 120) 00:10:48.076 4.006 - 4.030: 93.5792% ( 104) 00:10:48.076 4.030 - 4.053: 94.5339% ( 131) 00:10:48.076 4.053 - 4.077: 95.2482% ( 98) 00:10:48.076 4.077 - 4.101: 95.8749% ( 86) 00:10:48.076 4.101 - 4.124: 96.2102% ( 46) 00:10:48.076 4.124 - 4.148: 96.4070% ( 27) 00:10:48.076 4.148 - 4.172: 96.6110% ( 28) 00:10:48.076 4.172 - 4.196: 96.7787% ( 23) 00:10:48.076 4.196 - 4.219: 96.8880% ( 15) 00:10:48.076 4.219 - 4.243: 96.9827% ( 13) 00:10:48.076 4.243 - 4.267: 97.0848% ( 14) 00:10:48.076 4.267 - 4.290: 97.1868% ( 14) 00:10:48.076 4.290 - 4.314: 97.2597% ( 10) 00:10:48.076 4.314 - 4.338: 97.3253% ( 9) 00:10:48.076 4.338 - 4.361: 97.3544% ( 4) 00:10:48.076 4.361 - 4.385: 97.3836% ( 4) 00:10:48.076 4.385 - 4.409: 97.4127% ( 4) 00:10:48.076 4.409 - 4.433: 97.4273% ( 2) 00:10:48.076 4.456 - 4.480: 97.4346% ( 1) 00:10:48.076 4.480 - 4.504: 97.4565% ( 3) 00:10:48.076 4.527 - 4.551: 97.4637% ( 1) 00:10:48.076 4.575 - 4.599: 97.4710% ( 1) 00:10:48.076 4.599 - 4.622: 97.4856% ( 2) 00:10:48.076 4.622 - 4.646: 97.5075% ( 3) 00:10:48.076 4.646 - 4.670: 97.5220% ( 2) 00:10:48.076 4.670 - 4.693: 97.5366% ( 2) 00:10:48.076 4.693 - 4.717: 97.5585% ( 3) 00:10:48.076 4.717 - 4.741: 97.5949% ( 5) 00:10:48.076 4.741 - 4.764: 97.6241% ( 4) 00:10:48.076 4.764 - 4.788: 97.7042% ( 11) 00:10:48.076 4.788 - 4.812: 97.7626% ( 8) 00:10:48.076 4.812 - 4.836: 97.8063% ( 6) 00:10:48.076 4.836 - 4.859: 97.8427% ( 5) 00:10:48.076 4.859 - 4.883: 97.8719% ( 4) 00:10:48.076 4.883 - 4.907: 97.8865% ( 2) 00:10:48.076 4.907 - 4.930: 97.9375% ( 7) 00:10:48.076 4.930 - 4.954: 97.9448% ( 1) 00:10:48.076 4.954 - 4.978: 97.9666% ( 3) 00:10:48.076 4.978 - 5.001: 97.9958% ( 4) 00:10:48.076 5.001 - 5.025: 98.0322% ( 5) 00:10:48.076 5.025 - 5.049: 98.0395% ( 1) 00:10:48.076 5.049 - 5.073: 98.0468% ( 1) 00:10:48.076 5.073 - 5.096: 98.0759% ( 4) 00:10:48.076 5.096 - 5.120: 98.0832% ( 1) 00:10:48.076 5.144 - 5.167: 98.1124% ( 4) 00:10:48.076 5.167 - 5.191: 98.1197% ( 1) 00:10:48.076 5.191 - 5.215: 98.1270% ( 1) 00:10:48.076 5.215 - 5.239: 98.1415% ( 2) 00:10:48.076 5.239 - 5.262: 98.1488% ( 1) 00:10:48.076 5.262 - 5.286: 98.1634% ( 2) 00:10:48.076 5.286 - 5.310: 98.1707% ( 1) 00:10:48.076 5.357 - 5.381: 98.1853% ( 2) 00:10:48.076 5.404 - 5.428: 98.1926% ( 1) 00:10:48.076 5.428 - 5.452: 98.1998% ( 1) 00:10:48.076 5.452 - 5.476: 98.2071% ( 1) 
00:10:48.076 5.476 - 5.499: 98.2144% ( 1) 00:10:48.076 5.641 - 5.665: 98.2217% ( 1) 00:10:48.076 5.713 - 5.736: 98.2290% ( 1) 00:10:48.076 5.855 - 5.879: 98.2363% ( 1) 00:10:48.076 5.879 - 5.902: 98.2436% ( 1) 00:10:48.076 5.926 - 5.950: 98.2509% ( 1) 00:10:48.076 5.950 - 5.973: 98.2581% ( 1) 00:10:48.076 5.973 - 5.997: 98.2727% ( 2) 00:10:48.076 5.997 - 6.021: 98.2800% ( 1) 00:10:48.076 6.021 - 6.044: 98.2873% ( 1) 00:10:48.076 6.163 - 6.210: 98.2946% ( 1) 00:10:48.076 6.305 - 6.353: 98.3019% ( 1) 00:10:48.076 6.353 - 6.400: 98.3383% ( 5) 00:10:48.076 6.400 - 6.447: 98.3529% ( 2) 00:10:48.076 6.447 - 6.495: 98.3675% ( 2) 00:10:48.076 6.590 - 6.637: 98.3748% ( 1) 00:10:48.076 6.684 - 6.732: 98.3820% ( 1) 00:10:48.076 6.779 - 6.827: 98.3966% ( 2) 00:10:48.076 6.874 - 6.921: 98.4039% ( 1) 00:10:48.076 6.921 - 6.969: 98.4185% ( 2) 00:10:48.076 6.969 - 7.016: 98.4258% ( 1) 00:10:48.076 7.016 - 7.064: 98.4476% ( 3) 00:10:48.076 7.111 - 7.159: 98.4695% ( 3) 00:10:48.076 7.206 - 7.253: 98.4841% ( 2) 00:10:48.076 7.253 - 7.301: 98.4914% ( 1) 00:10:48.076 7.301 - 7.348: 98.5059% ( 2) 00:10:48.076 7.443 - 7.490: 98.5205% ( 2) 00:10:48.076 7.490 - 7.538: 98.5278% ( 1) 00:10:48.076 7.585 - 7.633: 98.5424% ( 2) 00:10:48.076 7.633 - 7.680: 98.5497% ( 1) 00:10:48.076 7.680 - 7.727: 98.5570% ( 1) 00:10:48.076 7.727 - 7.775: 98.5642% ( 1) 00:10:48.076 7.822 - 7.870: 98.5715% ( 1) 00:10:48.076 7.870 - 7.917: 98.5861% ( 2) 00:10:48.076 7.917 - 7.964: 98.5934% ( 1) 00:10:48.076 7.964 - 8.012: 98.6153% ( 3) 00:10:48.076 8.012 - 8.059: 98.6225% ( 1) 00:10:48.076 8.059 - 8.107: 98.6298% ( 1) 00:10:48.076 8.154 - 8.201: 98.6371% ( 1) 00:10:48.076 8.344 - 8.391: 98.6444% ( 1) 00:10:48.076 8.628 - 8.676: 98.6517% ( 1) 00:10:48.077 8.676 - 8.723: 98.6590% ( 1) 00:10:48.077 8.770 - 8.818: 98.6663% ( 1) 00:10:48.077 8.865 - 8.913: 98.6736% ( 1) 00:10:48.077 8.913 - 8.960: 98.6809% ( 1) 00:10:48.077 9.055 - 9.102: 98.6881% ( 1) 00:10:48.077 9.150 - 9.197: 98.6954% ( 1) 00:10:48.077 9.292 - 9.339: 98.7027% ( 1) 00:10:48.077 9.624 - 9.671: 98.7173% ( 2) 00:10:48.077 9.671 - 9.719: 98.7246% ( 1) 00:10:48.077 9.908 - 9.956: 98.7319% ( 1) 00:10:48.077 10.050 - 10.098: 98.7392% ( 1) 00:10:48.077 10.382 - 10.430: 98.7464% ( 1) 00:10:48.077 10.430 - 10.477: 98.7537% ( 1) 00:10:48.077 10.951 - 10.999: 98.7610% ( 1) 00:10:48.077 10.999 - 11.046: 98.7683% ( 1) 00:10:48.077 11.093 - 11.141: 98.7756% ( 1) 00:10:48.077 11.141 - 11.188: 98.7829% ( 1) 00:10:48.077 11.236 - 11.283: 98.7902% ( 1) 00:10:48.077 11.283 - 11.330: 98.7975% ( 1) 00:10:48.077 11.330 - 11.378: 98.8048% ( 1) 00:10:48.077 11.710 - 11.757: 98.8120% ( 1) 00:10:48.077 11.852 - 11.899: 98.8193% ( 1) 00:10:48.077 11.899 - 11.947: 98.8266% ( 1) 00:10:48.077 12.136 - 12.231: 98.8339% ( 1) 00:10:48.077 12.421 - 12.516: 98.8485% ( 2) 00:10:48.077 12.610 - 12.705: 98.8558% ( 1) 00:10:48.077 12.800 - 12.895: 98.8631% ( 1) 00:10:48.077 12.990 - 13.084: 98.8776% ( 2) 00:10:48.077 13.084 - 13.179: 98.8922% ( 2) 00:10:48.077 13.274 - 13.369: 98.8995% ( 1) 00:10:48.077 13.464 - 13.559: 98.9141% ( 2) 00:10:48.077 13.559 - 13.653: 98.9214% ( 1) 00:10:48.077 13.748 - 13.843: 98.9359% ( 2) 00:10:48.077 13.938 - 14.033: 98.9432% ( 1) 00:10:48.077 14.033 - 14.127: 98.9505% ( 1) 00:10:48.077 14.222 - 14.317: 98.9578% ( 1) 00:10:48.077 14.317 - 14.412: 98.9724% ( 2) 00:10:48.077 14.601 - 14.696: 98.9797% ( 1) 00:10:48.077 15.644 - 15.739: 98.9870% ( 1) 00:10:48.077 15.739 - 15.834: 98.9942% ( 1) 00:10:48.077 16.972 - 17.067: 99.0015% ( 1) 00:10:48.077 17.067 - 17.161: 99.0088% ( 1) 
00:10:48.077 17.161 - 17.256: 99.0161% ( 1) 00:10:48.077 17.256 - 17.351: 99.0234% ( 1) 00:10:48.077 17.351 - 17.446: 99.0380% ( 2) 00:10:48.077 17.446 - 17.541: 99.0598% ( 3) 00:10:48.077 17.541 - 17.636: 99.0963% ( 5) 00:10:48.077 17.636 - 17.730: 99.1327% ( 5) 00:10:48.077 17.730 - 17.825: 99.1764% ( 6) 00:10:48.077 17.825 - 17.920: 99.2129% ( 5) 00:10:48.077 17.920 - 18.015: 99.2858% ( 10) 00:10:48.077 18.015 - 18.110: 99.3149% ( 4) 00:10:48.077 18.110 - 18.204: 99.3951% ( 11) 00:10:48.077 18.204 - 18.299: 99.4825% ( 12) 00:10:48.077 18.299 - 18.394: 99.5408% ( 8) 00:10:48.077 18.394 - 18.489: 99.5992% ( 8) 00:10:48.077 18.489 - 18.584: 99.6429% ( 6) 00:10:48.077 18.584 - 18.679: 99.6647% ( 3) 00:10:48.077 18.679 - 18.773: 99.6866% ( 3) 00:10:48.077 18.868 - 18.963: 99.7158% ( 4) 00:10:48.077 18.963 - 19.058: 99.7231% ( 1) 00:10:48.077 19.058 - 19.153: 99.7303% ( 1) 00:10:48.077 19.153 - 19.247: 99.7376% ( 1) 00:10:48.077 19.247 - 19.342: 99.7522% ( 2) 00:10:48.077 19.342 - 19.437: 99.7741% ( 3) 00:10:48.077 19.627 - 19.721: 99.7814% ( 1) 00:10:48.077 19.721 - 19.816: 99.7886% ( 1) 00:10:48.077 19.816 - 19.911: 99.8032% ( 2) 00:10:48.077 20.290 - 20.385: 99.8105% ( 1) 00:10:48.077 20.480 - 20.575: 99.8178% ( 1) 00:10:48.077 21.144 - 21.239: 99.8251% ( 1) 00:10:48.077 21.239 - 21.333: 99.8324% ( 1) 00:10:48.077 22.566 - 22.661: 99.8397% ( 1) 00:10:48.077 25.410 - 25.600: 99.8469% ( 1) 00:10:48.077 26.169 - 26.359: 99.8542% ( 1) 00:10:48.077 30.530 - 30.720: 99.8615% ( 1) 00:10:48.077 1013.381 - 1019.449: 99.8688% ( 1) 00:10:48.077 3980.705 - 4004.978: 99.9490% ( 11) 00:10:48.077 4004.978 - 4029.250: 99.9854% ( 5) 00:10:48.077 6990.507 - 7039.052: 100.0000% ( 2) 00:10:48.077 00:10:48.077 Complete histogram 00:10:48.077 ================== 00:10:48.077 Range in us Cumulative Count 00:10:48.077 2.027 - 2.039: 5.1745% ( 710) 00:10:48.077 2.039 - 2.050: 12.4845% ( 1003) 00:10:48.077 2.050 - 2.062: 15.2175% ( 375) 00:10:48.077 2.062 - 2.074: 44.7999% ( 4059) 00:10:48.077 2.074 - 2.086: 59.6968% ( 2044) 00:10:48.077 2.086 - 2.098: 61.8541% ( 296) 00:10:48.077 2.098 - 2.110: 65.3961% ( 486) 00:10:48.077 2.110 - 2.121: 66.9193% ( 209) 00:10:48.077 2.121 - 2.133: 69.9730% ( 419) 00:10:48.077 2.133 - 2.145: 80.4169% ( 1433) 00:10:48.077 2.145 - 2.157: 84.1557% ( 513) 00:10:48.077 2.157 - 2.169: 85.0011% ( 116) 00:10:48.077 2.169 - 2.181: 86.4004% ( 192) 00:10:48.077 2.181 - 2.193: 87.3843% ( 135) 00:10:48.077 2.193 - 2.204: 88.4192% ( 142) 00:10:48.077 2.204 - 2.216: 92.2819% ( 530) 00:10:48.077 2.216 - 2.228: 94.3080% ( 278) 00:10:48.077 2.228 - 2.240: 94.5995% ( 40) 00:10:48.077 2.240 - 2.252: 94.8182% ( 30) 00:10:48.077 2.252 - 2.264: 95.0587% ( 33) 00:10:48.077 2.264 - 2.276: 95.1899% ( 18) 00:10:48.077 2.276 - 2.287: 95.4887% ( 41) 00:10:48.077 2.287 - 2.299: 95.7948% ( 42) 00:10:48.077 2.299 - 2.311: 95.9332% ( 19) 00:10:48.077 2.311 - 2.323: 96.1009% ( 23) 00:10:48.077 2.323 - 2.335: 96.3705% ( 37) 00:10:48.077 2.335 - 2.347: 96.6766% ( 42) 00:10:48.077 2.347 - 2.359: 97.0265% ( 48) 00:10:48.077 2.359 - 2.370: 97.3690% ( 47) 00:10:48.077 2.370 - 2.382: 97.6751% ( 42) 00:10:48.077 2.382 - 2.394: 97.8937% ( 30) 00:10:48.077 2.394 - 2.406: 97.9958% ( 14) 00:10:48.078 2.406 - 2.418: 98.0832% ( 12) 00:10:48.078 2.418 - 2.430: 98.1561% ( 10) 00:10:48.078 2.430 - 2.441: 98.2654% ( 15) 00:10:48.078 2.441 - 2.453: 98.3310% ( 9) 00:10:48.078 2.453 - 2.465: 98.3748% ( 6) 00:10:48.078 2.465 - 2.477: 98.4331% ( 8) 00:10:48.078 2.477 - 2.489: 98.4622% ( 4) 00:10:48.078 2.489 - 2.501: 98.4914% ( 4) 
00:10:48.078 2.501 - 2.513: 98.5059% ( 2) 00:10:48.078 2.524 - 2.536: 98.5132% ( 1) 00:10:48.078 2.536 - 2.548: 98.5205% ( 1) 00:10:48.078 2.548 - 2.560: 98.5278% ( 1) 00:10:48.078 2.560 - 2.572: 98.5351% ( 1) 00:10:48.078 2.572 - 2.584: 98.5570% ( 3) 00:10:48.078 2.584 - 2.596: 98.5642% ( 1) 00:10:48.078 2.596 - 2.607: 98.5861% ( 3) 00:10:48.078 2.607 - 2.619: 98.5934% ( 1) 00:10:48.078 2.655 - 2.667: 98.6007% ( 1) 00:10:48.078 2.690 - 2.702: 98.6080% ( 1) 00:10:48.078 2.750 - 2.761: 98.6153% ( 1) 00:10:48.078 2.761 - 2.773: 98.6225% ( 1) 00:10:48.078 2.785 - 2.797: 9[2024-04-24 16:06:48.927557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:48.078 8.6298% ( 1) 00:10:48.078 3.176 - 3.200: 98.6371% ( 1) 00:10:48.078 3.224 - 3.247: 98.6444% ( 1) 00:10:48.078 3.271 - 3.295: 98.6517% ( 1) 00:10:48.078 3.295 - 3.319: 98.6590% ( 1) 00:10:48.078 3.342 - 3.366: 98.6663% ( 1) 00:10:48.078 3.366 - 3.390: 98.6736% ( 1) 00:10:48.078 3.413 - 3.437: 98.6954% ( 3) 00:10:48.078 3.437 - 3.461: 98.7027% ( 1) 00:10:48.078 3.508 - 3.532: 98.7246% ( 3) 00:10:48.078 3.532 - 3.556: 98.7319% ( 1) 00:10:48.078 3.556 - 3.579: 98.7392% ( 1) 00:10:48.078 3.603 - 3.627: 98.7537% ( 2) 00:10:48.078 3.650 - 3.674: 98.7610% ( 1) 00:10:48.078 3.674 - 3.698: 98.7683% ( 1) 00:10:48.078 3.745 - 3.769: 98.7756% ( 1) 00:10:48.078 3.769 - 3.793: 98.7829% ( 1) 00:10:48.078 5.120 - 5.144: 98.7902% ( 1) 00:10:48.078 5.239 - 5.262: 98.7975% ( 1) 00:10:48.078 5.262 - 5.286: 98.8048% ( 1) 00:10:48.078 5.310 - 5.333: 98.8120% ( 1) 00:10:48.078 5.428 - 5.452: 98.8193% ( 1) 00:10:48.078 5.547 - 5.570: 98.8266% ( 1) 00:10:48.078 5.713 - 5.736: 98.8339% ( 1) 00:10:48.078 5.879 - 5.902: 98.8412% ( 1) 00:10:48.078 5.926 - 5.950: 98.8558% ( 2) 00:10:48.078 5.973 - 5.997: 98.8631% ( 1) 00:10:48.078 6.068 - 6.116: 98.8703% ( 1) 00:10:48.078 6.542 - 6.590: 98.8776% ( 1) 00:10:48.078 6.827 - 6.874: 98.8849% ( 1) 00:10:48.078 7.585 - 7.633: 98.8922% ( 1) 00:10:48.078 8.296 - 8.344: 98.8995% ( 1) 00:10:48.078 15.170 - 15.265: 98.9068% ( 1) 00:10:48.078 15.360 - 15.455: 98.9141% ( 1) 00:10:48.078 15.455 - 15.550: 98.9286% ( 2) 00:10:48.078 15.644 - 15.739: 98.9359% ( 1) 00:10:48.078 15.739 - 15.834: 98.9505% ( 2) 00:10:48.078 15.834 - 15.929: 98.9724% ( 3) 00:10:48.078 15.929 - 16.024: 99.0088% ( 5) 00:10:48.078 16.024 - 16.119: 99.0453% ( 5) 00:10:48.078 16.119 - 16.213: 99.0598% ( 2) 00:10:48.078 16.213 - 16.308: 99.1036% ( 6) 00:10:48.078 16.403 - 16.498: 99.1181% ( 2) 00:10:48.078 16.498 - 16.593: 99.1473% ( 4) 00:10:48.078 16.593 - 16.687: 99.1764% ( 4) 00:10:48.078 16.687 - 16.782: 99.2202% ( 6) 00:10:48.078 16.782 - 16.877: 99.2493% ( 4) 00:10:48.078 16.877 - 16.972: 99.2858% ( 5) 00:10:48.078 16.972 - 17.067: 99.3076% ( 3) 00:10:48.078 17.256 - 17.351: 99.3149% ( 1) 00:10:48.078 17.351 - 17.446: 99.3222% ( 1) 00:10:48.078 17.541 - 17.636: 99.3295% ( 1) 00:10:48.078 17.730 - 17.825: 99.3441% ( 2) 00:10:48.078 17.920 - 18.015: 99.3586% ( 2) 00:10:48.078 18.394 - 18.489: 99.3659% ( 1) 00:10:48.078 18.679 - 18.773: 99.3732% ( 1) 00:10:48.078 19.247 - 19.342: 99.3805% ( 1) 00:10:48.078 19.437 - 19.532: 99.3878% ( 1) 00:10:48.078 19.532 - 19.627: 99.3951% ( 1) 00:10:48.078 20.101 - 20.196: 99.4024% ( 1) 00:10:48.078 23.893 - 23.988: 99.4097% ( 1) 00:10:48.078 1031.585 - 1037.653: 99.4170% ( 1) 00:10:48.078 3009.801 - 3021.938: 99.4242% ( 1) 00:10:48.078 3689.434 - 3713.707: 99.4315% ( 1) 00:10:48.078 3980.705 - 4004.978: 99.8688% ( 60) 00:10:48.078 4004.978 - 4029.250: 100.0000% ( 18) 
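The submit and complete histograms above, printed by the overhead tool, are cumulative log-spaced latency distributions. Note the unit change: the summary line is in nanoseconds while the bucket ranges are in microseconds, so the submit maximum of 6993558.9 ns falls in the final 6990.507 - 7039.052 us bucket and the complete maximum of 4024452.2 ns in the 4004.978 - 4029.250 us bucket. To pull a percentile out of a saved copy of such output, something like the following awk sketch works (submit_hist.txt is a hypothetical file holding one bucket range per line, timestamps stripped):

    # Print the first bucket whose cumulative share reaches 99%.
    awk '$2 == "-" {
           gsub(/[:%()]/, "")
           if ($4 + 0 >= 99.0) {
             printf "p99 bucket: %s-%s us (cumulative %s%%)\n", $1, $3, $4
             exit
           }
         }' submit_hist.txt

For the submit histogram above this lands in the 16.972 - 17.067 us bucket, i.e. a p99 submit latency of roughly 17 us against the 9.7 us average.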
00:10:48.078 00:10:48.078 16:06:48 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:48.078 16:06:48 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:48.078 16:06:48 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:48.078 16:06:48 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:48.078 16:06:48 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:48.078 [ 00:10:48.078 { 00:10:48.078 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:48.078 "subtype": "Discovery", 00:10:48.078 "listen_addresses": [], 00:10:48.078 "allow_any_host": true, 00:10:48.078 "hosts": [] 00:10:48.078 }, 00:10:48.078 { 00:10:48.078 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:48.078 "subtype": "NVMe", 00:10:48.078 "listen_addresses": [ 00:10:48.078 { 00:10:48.078 "transport": "VFIOUSER", 00:10:48.078 "trtype": "VFIOUSER", 00:10:48.078 "adrfam": "IPv4", 00:10:48.078 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:48.078 "trsvcid": "0" 00:10:48.078 } 00:10:48.078 ], 00:10:48.078 "allow_any_host": true, 00:10:48.078 "hosts": [], 00:10:48.078 "serial_number": "SPDK1", 00:10:48.079 "model_number": "SPDK bdev Controller", 00:10:48.079 "max_namespaces": 32, 00:10:48.079 "min_cntlid": 1, 00:10:48.079 "max_cntlid": 65519, 00:10:48.079 "namespaces": [ 00:10:48.079 { 00:10:48.079 "nsid": 1, 00:10:48.079 "bdev_name": "Malloc1", 00:10:48.079 "name": "Malloc1", 00:10:48.079 "nguid": "A308981BFCAD472AA27AB3D8CCC7D66E", 00:10:48.079 "uuid": "a308981b-fcad-472a-a27a-b3d8ccc7d66e" 00:10:48.079 }, 00:10:48.079 { 00:10:48.079 "nsid": 2, 00:10:48.079 "bdev_name": "Malloc3", 00:10:48.079 "name": "Malloc3", 00:10:48.079 "nguid": "88337F384BD9467FBE6A0D816B818147", 00:10:48.079 "uuid": "88337f38-4bd9-467f-be6a-0d816b818147" 00:10:48.079 } 00:10:48.079 ] 00:10:48.079 }, 00:10:48.079 { 00:10:48.079 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:48.079 "subtype": "NVMe", 00:10:48.079 "listen_addresses": [ 00:10:48.079 { 00:10:48.079 "transport": "VFIOUSER", 00:10:48.079 "trtype": "VFIOUSER", 00:10:48.079 "adrfam": "IPv4", 00:10:48.079 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:48.079 "trsvcid": "0" 00:10:48.079 } 00:10:48.079 ], 00:10:48.079 "allow_any_host": true, 00:10:48.079 "hosts": [], 00:10:48.079 "serial_number": "SPDK2", 00:10:48.079 "model_number": "SPDK bdev Controller", 00:10:48.079 "max_namespaces": 32, 00:10:48.079 "min_cntlid": 1, 00:10:48.079 "max_cntlid": 65519, 00:10:48.079 "namespaces": [ 00:10:48.079 { 00:10:48.079 "nsid": 1, 00:10:48.079 "bdev_name": "Malloc2", 00:10:48.079 "name": "Malloc2", 00:10:48.079 "nguid": "A61AC55A39A54D53BA0059CEFAC7D2A0", 00:10:48.079 "uuid": "a61ac55a-39a5-4d53-ba00-59cefac7d2a0" 00:10:48.079 } 00:10:48.079 ] 00:10:48.079 } 00:10:48.079 ] 00:10:48.079 16:06:49 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:48.079 16:06:49 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3354631 00:10:48.079 16:06:49 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:48.079 16:06:49 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:48.079 16:06:49 -- common/autotest_common.sh@1251 -- # local i=0 00:10:48.079 16:06:49 
-- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:48.079 16:06:49 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:48.079 16:06:49 -- common/autotest_common.sh@1262 -- # return 0 00:10:48.079 16:06:49 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:48.079 16:06:49 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:48.079 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.338 [2024-04-24 16:06:49.410226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:48.338 Malloc4 00:10:48.338 16:06:49 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:48.631 [2024-04-24 16:06:49.755665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:48.631 16:06:49 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:48.631 Asynchronous Event Request test 00:10:48.631 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:48.631 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:48.631 Registering asynchronous event callbacks... 00:10:48.631 Starting namespace attribute notice tests for all controllers... 00:10:48.631 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:48.631 aer_cb - Changed Namespace 00:10:48.631 Cleaning up... 00:10:48.920 [ 00:10:48.920 { 00:10:48.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:48.920 "subtype": "Discovery", 00:10:48.920 "listen_addresses": [], 00:10:48.920 "allow_any_host": true, 00:10:48.920 "hosts": [] 00:10:48.920 }, 00:10:48.920 { 00:10:48.920 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:48.920 "subtype": "NVMe", 00:10:48.920 "listen_addresses": [ 00:10:48.920 { 00:10:48.920 "transport": "VFIOUSER", 00:10:48.920 "trtype": "VFIOUSER", 00:10:48.920 "adrfam": "IPv4", 00:10:48.920 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:48.920 "trsvcid": "0" 00:10:48.920 } 00:10:48.920 ], 00:10:48.920 "allow_any_host": true, 00:10:48.920 "hosts": [], 00:10:48.920 "serial_number": "SPDK1", 00:10:48.920 "model_number": "SPDK bdev Controller", 00:10:48.920 "max_namespaces": 32, 00:10:48.920 "min_cntlid": 1, 00:10:48.920 "max_cntlid": 65519, 00:10:48.920 "namespaces": [ 00:10:48.920 { 00:10:48.920 "nsid": 1, 00:10:48.920 "bdev_name": "Malloc1", 00:10:48.920 "name": "Malloc1", 00:10:48.920 "nguid": "A308981BFCAD472AA27AB3D8CCC7D66E", 00:10:48.920 "uuid": "a308981b-fcad-472a-a27a-b3d8ccc7d66e" 00:10:48.920 }, 00:10:48.920 { 00:10:48.920 "nsid": 2, 00:10:48.920 "bdev_name": "Malloc3", 00:10:48.920 "name": "Malloc3", 00:10:48.920 "nguid": "88337F384BD9467FBE6A0D816B818147", 00:10:48.920 "uuid": "88337f38-4bd9-467f-be6a-0d816b818147" 00:10:48.920 } 00:10:48.920 ] 00:10:48.920 }, 00:10:48.920 { 00:10:48.920 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:48.920 "subtype": "NVMe", 00:10:48.920 "listen_addresses": [ 00:10:48.920 { 00:10:48.920 "transport": "VFIOUSER", 00:10:48.920 "trtype": "VFIOUSER", 00:10:48.920 "adrfam": "IPv4", 00:10:48.920 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:48.920 "trsvcid": "0" 00:10:48.920 } 00:10:48.920 ], 00:10:48.920 "allow_any_host": true, 00:10:48.920 "hosts": [], 00:10:48.920 
"serial_number": "SPDK2", 00:10:48.920 "model_number": "SPDK bdev Controller", 00:10:48.920 "max_namespaces": 32, 00:10:48.920 "min_cntlid": 1, 00:10:48.920 "max_cntlid": 65519, 00:10:48.920 "namespaces": [ 00:10:48.920 { 00:10:48.920 "nsid": 1, 00:10:48.920 "bdev_name": "Malloc2", 00:10:48.920 "name": "Malloc2", 00:10:48.920 "nguid": "A61AC55A39A54D53BA0059CEFAC7D2A0", 00:10:48.920 "uuid": "a61ac55a-39a5-4d53-ba00-59cefac7d2a0" 00:10:48.920 }, 00:10:48.920 { 00:10:48.920 "nsid": 2, 00:10:48.920 "bdev_name": "Malloc4", 00:10:48.920 "name": "Malloc4", 00:10:48.920 "nguid": "DD526792E7A24C17A6FEE2AEAAAEADBE", 00:10:48.920 "uuid": "dd526792-e7a2-4c17-a6fe-e2aeaaaeadbe" 00:10:48.920 } 00:10:48.920 ] 00:10:48.920 } 00:10:48.920 ] 00:10:48.920 16:06:50 -- target/nvmf_vfio_user.sh@44 -- # wait 3354631 00:10:48.920 16:06:50 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:48.920 16:06:50 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3348527 00:10:48.920 16:06:50 -- common/autotest_common.sh@936 -- # '[' -z 3348527 ']' 00:10:48.920 16:06:50 -- common/autotest_common.sh@940 -- # kill -0 3348527 00:10:48.920 16:06:50 -- common/autotest_common.sh@941 -- # uname 00:10:48.920 16:06:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:48.920 16:06:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3348527 00:10:48.920 16:06:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:48.920 16:06:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:48.920 16:06:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3348527' 00:10:48.920 killing process with pid 3348527 00:10:48.920 16:06:50 -- common/autotest_common.sh@955 -- # kill 3348527 00:10:48.920 [2024-04-24 16:06:50.046128] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:48.920 16:06:50 -- common/autotest_common.sh@960 -- # wait 3348527 00:10:49.178 16:06:50 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:49.178 16:06:50 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:49.179 16:06:50 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:49.179 16:06:50 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:49.179 16:06:50 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:49.179 16:06:50 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3354784 00:10:49.179 16:06:50 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:49.179 16:06:50 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3354784' 00:10:49.179 Process pid: 3354784 00:10:49.179 16:06:50 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:49.179 16:06:50 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3354784 00:10:49.179 16:06:50 -- common/autotest_common.sh@817 -- # '[' -z 3354784 ']' 00:10:49.179 16:06:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.179 16:06:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:49.179 16:06:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:49.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.179 16:06:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:49.179 16:06:50 -- common/autotest_common.sh@10 -- # set +x 00:10:49.438 [2024-04-24 16:06:50.475506] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:49.438 [2024-04-24 16:06:50.476765] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:10:49.438 [2024-04-24 16:06:50.476836] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.438 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.438 [2024-04-24 16:06:50.542695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.438 [2024-04-24 16:06:50.654273] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.438 [2024-04-24 16:06:50.654329] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.438 [2024-04-24 16:06:50.654345] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.438 [2024-04-24 16:06:50.654360] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.438 [2024-04-24 16:06:50.654372] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.438 [2024-04-24 16:06:50.654473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.438 [2024-04-24 16:06:50.654538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.438 [2024-04-24 16:06:50.654632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.438 [2024-04-24 16:06:50.654635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.697 [2024-04-24 16:06:50.763793] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:10:49.697 [2024-04-24 16:06:50.764015] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:10:49.697 [2024-04-24 16:06:50.764339] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:10:49.697 [2024-04-24 16:06:50.765160] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:49.697 [2024-04-24 16:06:50.765290] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
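With the target restarted in interrupt mode, the trace that follows re-creates the vfio-user topology over RPC. Condensed into a loop, the per-device sequence it performs (all commands as traced below; the rpc.py path is shortened on the assumption of running from the SPDK repo root) amounts to this sketch:

    # Hedged condensation of the setup traced below, for the two test devices.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
      dir=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$dir"
      $rpc bdev_malloc_create 64 512 -b Malloc$i                       # 64 MB bdev, 512 B blocks
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a "$dir" -s 0
    done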
00:10:50.262 16:06:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:50.262 16:06:51 -- common/autotest_common.sh@850 -- # return 0 00:10:50.262 16:06:51 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:51.198 16:06:52 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:51.456 16:06:52 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:51.456 16:06:52 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:51.456 16:06:52 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:51.456 16:06:52 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:51.715 16:06:52 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:51.715 Malloc1 00:10:51.975 16:06:53 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:51.975 16:06:53 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:52.234 16:06:53 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:52.492 16:06:53 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:52.492 16:06:53 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:52.492 16:06:53 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:52.750 Malloc2 00:10:52.750 16:06:54 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:53.008 16:06:54 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:53.266 16:06:54 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:53.524 16:06:54 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:53.524 16:06:54 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3354784 00:10:53.524 16:06:54 -- common/autotest_common.sh@936 -- # '[' -z 3354784 ']' 00:10:53.524 16:06:54 -- common/autotest_common.sh@940 -- # kill -0 3354784 00:10:53.524 16:06:54 -- common/autotest_common.sh@941 -- # uname 00:10:53.524 16:06:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.524 16:06:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3354784 00:10:53.524 16:06:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:53.524 16:06:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:53.524 16:06:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3354784' 00:10:53.524 killing process with pid 3354784 00:10:53.524 16:06:54 -- common/autotest_common.sh@955 -- # kill 3354784 00:10:53.524 16:06:54 -- common/autotest_common.sh@960 -- # wait 3354784 00:10:53.782 [2024-04-24 16:06:54.938436] 
thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:10:54.041 16:06:55 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:54.041 16:06:55 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:54.041 00:10:54.041 real 0m53.148s 00:10:54.041 user 3m29.676s 00:10:54.041 sys 0m4.596s 00:10:54.041 16:06:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:54.041 16:06:55 -- common/autotest_common.sh@10 -- # set +x 00:10:54.041 ************************************ 00:10:54.041 END TEST nvmf_vfio_user 00:10:54.041 ************************************ 00:10:54.041 16:06:55 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:54.041 16:06:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:54.041 16:06:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:54.041 16:06:55 -- common/autotest_common.sh@10 -- # set +x 00:10:54.041 ************************************ 00:10:54.041 START TEST nvmf_vfio_user_nvme_compliance 00:10:54.041 ************************************ 00:10:54.042 16:06:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:54.042 * Looking for test storage... 00:10:54.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:54.042 16:06:55 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.042 16:06:55 -- nvmf/common.sh@7 -- # uname -s 00:10:54.042 16:06:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.042 16:06:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.042 16:06:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.042 16:06:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.042 16:06:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.042 16:06:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.042 16:06:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.042 16:06:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.042 16:06:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.042 16:06:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.042 16:06:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:54.042 16:06:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:54.042 16:06:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.042 16:06:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.042 16:06:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.042 16:06:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.042 16:06:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.042 16:06:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.042 16:06:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.042 16:06:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.042 16:06:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.042 16:06:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.042 16:06:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.042 16:06:55 -- paths/export.sh@5 -- # export PATH 00:10:54.042 16:06:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.042 16:06:55 -- nvmf/common.sh@47 -- # : 0 00:10:54.042 16:06:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.042 16:06:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.042 16:06:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.042 16:06:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.042 16:06:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.042 16:06:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.042 16:06:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.042 16:06:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.042 16:06:55 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.042 16:06:55 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.042 16:06:55 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:54.042 16:06:55 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:54.042 16:06:55 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:54.042 16:06:55 -- compliance/compliance.sh@20 -- # nvmfpid=3355398 00:10:54.042 16:06:55 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x7 00:10:54.042 16:06:55 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3355398' 00:10:54.042 Process pid: 3355398 00:10:54.042 16:06:55 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:54.042 16:06:55 -- compliance/compliance.sh@24 -- # waitforlisten 3355398 00:10:54.042 16:06:55 -- common/autotest_common.sh@817 -- # '[' -z 3355398 ']' 00:10:54.042 16:06:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.042 16:06:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:54.042 16:06:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.042 16:06:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:54.042 16:06:55 -- common/autotest_common.sh@10 -- # set +x 00:10:54.042 [2024-04-24 16:06:55.319299] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:10:54.042 [2024-04-24 16:06:55.319385] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.302 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.302 [2024-04-24 16:06:55.380056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:54.302 [2024-04-24 16:06:55.486453] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.302 [2024-04-24 16:06:55.486504] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.302 [2024-04-24 16:06:55.486532] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.302 [2024-04-24 16:06:55.486545] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.302 [2024-04-24 16:06:55.486555] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
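The compliance target above is started with -m 0x7 rather than the four-core list used earlier; mask bits map one-to-one to core IDs, which is why the EAL reports three cores available and three reactors come up below. Illustrative bash, nothing SPDK-specific:

    mask=0x7
    for i in {0..31}; do
        (( (mask >> i) & 1 )) && echo "core $i selected"
    done
    # -> core 0, core 1, core 2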
00:10:54.302 [2024-04-24 16:06:55.486697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.302 [2024-04-24 16:06:55.486762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.302 [2024-04-24 16:06:55.486767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.562 16:06:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:54.562 16:06:55 -- common/autotest_common.sh@850 -- # return 0 00:10:54.562 16:06:55 -- compliance/compliance.sh@26 -- # sleep 1 00:10:55.502 16:06:56 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:55.502 16:06:56 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:55.502 16:06:56 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:55.502 16:06:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.502 16:06:56 -- common/autotest_common.sh@10 -- # set +x 00:10:55.502 16:06:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.502 16:06:56 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:55.502 16:06:56 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:55.502 16:06:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.502 16:06:56 -- common/autotest_common.sh@10 -- # set +x 00:10:55.502 malloc0 00:10:55.502 16:06:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.502 16:06:56 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:55.502 16:06:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.502 16:06:56 -- common/autotest_common.sh@10 -- # set +x 00:10:55.502 16:06:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.502 16:06:56 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:55.502 16:06:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.502 16:06:56 -- common/autotest_common.sh@10 -- # set +x 00:10:55.502 16:06:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.502 16:06:56 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:55.502 16:06:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.502 16:06:56 -- common/autotest_common.sh@10 -- # set +x 00:10:55.502 16:06:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.502 16:06:56 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:55.502 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.762 00:10:55.762 00:10:55.762 CUnit - A unit testing framework for C - Version 2.1-3 00:10:55.762 http://cunit.sourceforge.net/ 00:10:55.762 00:10:55.762 00:10:55.762 Suite: nvme_compliance 00:10:55.762 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-24 16:06:56.851290] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:55.762 [2024-04-24 16:06:56.852677] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:55.762 [2024-04-24 16:06:56.852700] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:55.762 [2024-04-24 16:06:56.852727] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:55.762 
[2024-04-24 16:06:56.854305] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:55.762 passed 00:10:55.762 Test: admin_identify_ctrlr_verify_fused ...[2024-04-24 16:06:56.938859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:55.762 [2024-04-24 16:06:56.941883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:55.762 passed 00:10:55.762 Test: admin_identify_ns ...[2024-04-24 16:06:57.027329] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.020 [2024-04-24 16:06:57.090763] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:56.020 [2024-04-24 16:06:57.098756] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:56.020 [2024-04-24 16:06:57.119895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.020 passed 00:10:56.020 Test: admin_get_features_mandatory_features ...[2024-04-24 16:06:57.200592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.020 [2024-04-24 16:06:57.205627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.020 passed 00:10:56.020 Test: admin_get_features_optional_features ...[2024-04-24 16:06:57.290159] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.020 [2024-04-24 16:06:57.293178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.279 passed 00:10:56.279 Test: admin_set_features_number_of_queues ...[2024-04-24 16:06:57.377177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.279 [2024-04-24 16:06:57.481985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.279 passed 00:10:56.539 Test: admin_get_log_page_mandatory_logs ...[2024-04-24 16:06:57.568256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.539 [2024-04-24 16:06:57.571282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.539 passed 00:10:56.540 Test: admin_get_log_page_with_lpo ...[2024-04-24 16:06:57.656613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.540 [2024-04-24 16:06:57.724771] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:56.540 [2024-04-24 16:06:57.737859] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.540 passed 00:10:56.540 Test: fabric_property_get ...[2024-04-24 16:06:57.819683] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.540 [2024-04-24 16:06:57.820973] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:56.540 [2024-04-24 16:06:57.822691] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.800 passed 00:10:56.800 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-24 16:06:57.907237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.800 [2024-04-24 16:06:57.908498] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:56.800 [2024-04-24 16:06:57.910258] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:10:56.800 passed 00:10:56.800 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-24 16:06:57.992411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.800 [2024-04-24 16:06:58.075750] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:57.060 [2024-04-24 16:06:58.091750] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:57.060 [2024-04-24 16:06:58.099962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.060 passed 00:10:57.060 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-24 16:06:58.181614] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.060 [2024-04-24 16:06:58.182910] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:57.060 [2024-04-24 16:06:58.184638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.060 passed 00:10:57.060 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-24 16:06:58.267830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.060 [2024-04-24 16:06:58.341767] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:57.320 [2024-04-24 16:06:58.365750] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:57.320 [2024-04-24 16:06:58.370852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.320 passed 00:10:57.320 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-24 16:06:58.457579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.320 [2024-04-24 16:06:58.458868] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:57.320 [2024-04-24 16:06:58.458922] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:57.320 [2024-04-24 16:06:58.460600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.320 passed 00:10:57.320 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-24 16:06:58.542858] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.579 [2024-04-24 16:06:58.636750] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:57.579 [2024-04-24 16:06:58.644753] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:57.579 [2024-04-24 16:06:58.652754] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:57.579 [2024-04-24 16:06:58.660751] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:57.579 [2024-04-24 16:06:58.689863] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.579 passed 00:10:57.579 Test: admin_create_io_sq_verify_pc ...[2024-04-24 16:06:58.773861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.579 [2024-04-24 16:06:58.790767] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:57.579 [2024-04-24 16:06:58.808259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.579 passed 00:10:57.839 Test: admin_create_io_qp_max_qps ...[2024-04-24 16:06:58.890819] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:58.779 [2024-04-24 16:06:59.986758] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:59.346 [2024-04-24 16:07:00.360028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:59.346 passed 00:10:59.346 Test: admin_create_io_sq_shared_cq ...[2024-04-24 16:07:00.448598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:59.346 [2024-04-24 16:07:00.579750] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:59.346 [2024-04-24 16:07:00.616849] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:59.606 passed 00:10:59.606 00:10:59.606 Run Summary: Type Total Ran Passed Failed Inactive 00:10:59.606 suites 1 1 n/a 0 0 00:10:59.606 tests 18 18 18 0 0 00:10:59.606 asserts 360 360 360 0 n/a 00:10:59.606 00:10:59.606 Elapsed time = 1.562 seconds 00:10:59.606 16:07:00 -- compliance/compliance.sh@42 -- # killprocess 3355398 00:10:59.606 16:07:00 -- common/autotest_common.sh@936 -- # '[' -z 3355398 ']' 00:10:59.606 16:07:00 -- common/autotest_common.sh@940 -- # kill -0 3355398 00:10:59.606 16:07:00 -- common/autotest_common.sh@941 -- # uname 00:10:59.606 16:07:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:59.606 16:07:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3355398 00:10:59.606 16:07:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:59.606 16:07:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:59.606 16:07:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3355398' 00:10:59.606 killing process with pid 3355398 00:10:59.606 16:07:00 -- common/autotest_common.sh@955 -- # kill 3355398 00:10:59.606 16:07:00 -- common/autotest_common.sh@960 -- # wait 3355398 00:10:59.865 16:07:00 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:59.865 16:07:00 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:59.865 00:10:59.865 real 0m5.790s 00:10:59.865 user 0m16.187s 00:10:59.865 sys 0m0.551s 00:10:59.865 16:07:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:59.865 16:07:00 -- common/autotest_common.sh@10 -- # set +x 00:10:59.865 ************************************ 00:10:59.865 END TEST nvmf_vfio_user_nvme_compliance 00:10:59.865 ************************************ 00:10:59.865 16:07:01 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:59.865 16:07:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:59.865 16:07:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.865 16:07:01 -- common/autotest_common.sh@10 -- # set +x 00:10:59.865 ************************************ 00:10:59.865 START TEST nvmf_vfio_user_fuzz 00:10:59.865 ************************************ 00:10:59.865 16:07:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:00.123 * Looking for test storage... 
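The compliance suite finishes clean (18/18 tests, 360/360 asserts) and the run moves on to fuzzing the same style of vfio-user endpoint. The invocation appears in full further down; the essentials are a fixed seed (-S 123456) and a 30-second budget (-t 30), so a failing run can be replayed deterministically. A sketch with relative paths:

    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a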
00:11:00.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.123 16:07:01 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.123 16:07:01 -- nvmf/common.sh@7 -- # uname -s 00:11:00.123 16:07:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.123 16:07:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.123 16:07:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.123 16:07:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.123 16:07:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.123 16:07:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.123 16:07:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.123 16:07:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.123 16:07:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.123 16:07:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.123 16:07:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:00.123 16:07:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:00.123 16:07:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.123 16:07:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.123 16:07:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.123 16:07:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.123 16:07:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.123 16:07:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.123 16:07:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.123 16:07:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.123 16:07:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.123 16:07:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.123 16:07:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.124 16:07:01 -- paths/export.sh@5 -- # export PATH 00:11:00.124 16:07:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.124 16:07:01 -- nvmf/common.sh@47 -- # : 0 00:11:00.124 16:07:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.124 16:07:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.124 16:07:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.124 16:07:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.124 16:07:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.124 16:07:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.124 16:07:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.124 16:07:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3356243 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3356243' 00:11:00.124 Process pid: 3356243 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:00.124 16:07:01 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3356243 00:11:00.124 16:07:01 -- common/autotest_common.sh@817 -- # '[' -z 3356243 ']' 00:11:00.124 16:07:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.124 16:07:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:00.124 16:07:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:00.124 16:07:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:00.124 16:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:00.383 16:07:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:00.383 16:07:01 -- common/autotest_common.sh@850 -- # return 0 00:11:00.383 16:07:01 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:01.319 16:07:02 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:01.319 16:07:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.319 16:07:02 -- common/autotest_common.sh@10 -- # set +x 00:11:01.319 16:07:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.319 16:07:02 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:01.319 16:07:02 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:01.319 16:07:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.319 16:07:02 -- common/autotest_common.sh@10 -- # set +x 00:11:01.319 malloc0 00:11:01.319 16:07:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.319 16:07:02 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:01.319 16:07:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.319 16:07:02 -- common/autotest_common.sh@10 -- # set +x 00:11:01.319 16:07:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.319 16:07:02 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:01.319 16:07:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.319 16:07:02 -- common/autotest_common.sh@10 -- # set +x 00:11:01.319 16:07:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.319 16:07:02 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:01.319 16:07:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.319 16:07:02 -- common/autotest_common.sh@10 -- # set +x 00:11:01.319 16:07:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.319 16:07:02 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:01.319 16:07:02 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:33.385 Fuzzing completed. 
Shutting down the fuzz application 00:11:33.385 00:11:33.385 Dumping successful admin opcodes: 00:11:33.385 8, 9, 10, 24, 00:11:33.385 Dumping successful io opcodes: 00:11:33.385 0, 00:11:33.385 NS: 0x200003a1ef00 I/O qp, Total commands completed: 558463, total successful commands: 2149, random_seed: 741276672 00:11:33.386 NS: 0x200003a1ef00 admin qp, Total commands completed: 120645, total successful commands: 989, random_seed: 2144188480 00:11:33.386 16:07:33 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:33.386 16:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:33.386 16:07:33 -- common/autotest_common.sh@10 -- # set +x 00:11:33.386 16:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:33.386 16:07:33 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3356243 00:11:33.386 16:07:33 -- common/autotest_common.sh@936 -- # '[' -z 3356243 ']' 00:11:33.386 16:07:33 -- common/autotest_common.sh@940 -- # kill -0 3356243 00:11:33.386 16:07:33 -- common/autotest_common.sh@941 -- # uname 00:11:33.386 16:07:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:33.386 16:07:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3356243 00:11:33.386 16:07:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:33.386 16:07:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:33.386 16:07:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3356243' 00:11:33.386 killing process with pid 3356243 00:11:33.386 16:07:33 -- common/autotest_common.sh@955 -- # kill 3356243 00:11:33.386 16:07:33 -- common/autotest_common.sh@960 -- # wait 3356243 00:11:33.386 16:07:33 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:33.386 16:07:33 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:33.386 00:11:33.386 real 0m32.370s 00:11:33.386 user 0m30.788s 00:11:33.386 sys 0m28.763s 00:11:33.386 16:07:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:33.386 16:07:33 -- common/autotest_common.sh@10 -- # set +x 00:11:33.386 ************************************ 00:11:33.386 END TEST nvmf_vfio_user_fuzz 00:11:33.386 ************************************ 00:11:33.386 16:07:33 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:33.386 16:07:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:33.386 16:07:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.386 16:07:33 -- common/autotest_common.sh@10 -- # set +x 00:11:33.386 ************************************ 00:11:33.386 START TEST nvmf_host_management 00:11:33.386 ************************************ 00:11:33.386 16:07:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:33.386 * Looking for test storage... 
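Every target teardown in this log (pids 3348527, 3354784, 3355398 and, above, 3356243) goes through the same killprocess helper; its xtrace fragments reduce to roughly this shape (a sketch, not the full autotest_common.sh implementation):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # anything to kill?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        [ "$name" = sudo ] && return 1              # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }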
00:11:33.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.386 16:07:33 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.386 16:07:33 -- nvmf/common.sh@7 -- # uname -s 00:11:33.386 16:07:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.386 16:07:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.386 16:07:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.386 16:07:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.386 16:07:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.386 16:07:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.386 16:07:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.386 16:07:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.386 16:07:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.386 16:07:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.386 16:07:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.386 16:07:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.386 16:07:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.386 16:07:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.386 16:07:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.386 16:07:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.386 16:07:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.386 16:07:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.386 16:07:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.386 16:07:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.386 16:07:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.386 16:07:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.386 16:07:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.386 16:07:33 -- paths/export.sh@5 -- # export PATH 00:11:33.386 16:07:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.386 16:07:33 -- nvmf/common.sh@47 -- # : 0 00:11:33.386 16:07:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.386 16:07:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.386 16:07:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.386 16:07:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.386 16:07:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.386 16:07:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.386 16:07:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.386 16:07:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.386 16:07:33 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.386 16:07:33 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.386 16:07:33 -- target/host_management.sh@105 -- # nvmftestinit 00:11:33.386 16:07:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:33.386 16:07:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.386 16:07:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:33.386 16:07:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:33.386 16:07:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:33.386 16:07:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.386 16:07:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.386 16:07:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.386 16:07:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:33.386 16:07:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:33.386 16:07:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:33.386 16:07:33 -- common/autotest_common.sh@10 -- # set +x 00:11:34.765 16:07:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:34.765 16:07:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.765 16:07:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.765 16:07:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.766 16:07:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.766 16:07:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.766 16:07:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.766 16:07:35 -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.766 16:07:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.766 
16:07:35 -- nvmf/common.sh@296 -- # e810=() 00:11:34.766 16:07:35 -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.766 16:07:35 -- nvmf/common.sh@297 -- # x722=() 00:11:34.766 16:07:35 -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.766 16:07:35 -- nvmf/common.sh@298 -- # mlx=() 00:11:34.766 16:07:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.766 16:07:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.766 16:07:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.766 16:07:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.766 16:07:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.766 16:07:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.766 16:07:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:34.766 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:34.766 16:07:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.766 16:07:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:34.766 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:34.766 16:07:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.766 16:07:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.766 16:07:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.766 16:07:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:34.766 16:07:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.766 16:07:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:09:00.0: cvl_0_0' 00:11:34.766 Found net devices under 0000:09:00.0: cvl_0_0 00:11:34.766 16:07:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.766 16:07:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.766 16:07:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.766 16:07:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:34.766 16:07:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.766 16:07:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:34.766 Found net devices under 0000:09:00.1: cvl_0_1 00:11:34.766 16:07:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.766 16:07:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:34.766 16:07:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:34.766 16:07:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:34.766 16:07:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.766 16:07:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.766 16:07:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.766 16:07:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.766 16:07:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.766 16:07:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.766 16:07:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.766 16:07:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.766 16:07:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.766 16:07:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.766 16:07:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.766 16:07:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.766 16:07:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.766 16:07:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.766 16:07:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.766 16:07:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.766 16:07:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.766 16:07:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.766 16:07:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.766 16:07:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:11:34.766 00:11:34.766 --- 10.0.0.2 ping statistics --- 00:11:34.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.766 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:11:34.766 16:07:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:11:34.766 00:11:34.766 --- 10.0.0.1 ping statistics --- 00:11:34.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.766 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:11:34.766 16:07:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.766 16:07:35 -- nvmf/common.sh@411 -- # return 0 00:11:34.766 16:07:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:34.766 16:07:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.766 16:07:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:34.766 16:07:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.766 16:07:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:34.766 16:07:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:34.766 16:07:35 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:11:34.766 16:07:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:34.766 16:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.766 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:11:34.766 ************************************ 00:11:34.766 START TEST nvmf_host_management 00:11:34.766 ************************************ 00:11:34.766 16:07:35 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:11:34.766 16:07:35 -- target/host_management.sh@69 -- # starttarget 00:11:34.766 16:07:35 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:34.766 16:07:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:34.766 16:07:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:34.766 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:11:34.766 16:07:35 -- nvmf/common.sh@470 -- # nvmfpid=3361721 00:11:34.766 16:07:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:34.766 16:07:35 -- nvmf/common.sh@471 -- # waitforlisten 3361721 00:11:34.766 16:07:35 -- common/autotest_common.sh@817 -- # '[' -z 3361721 ']' 00:11:34.766 16:07:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.766 16:07:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:34.766 16:07:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.766 16:07:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:34.766 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:11:34.766 [2024-04-24 16:07:35.988926] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:11:34.766 [2024-04-24 16:07:35.989005] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.766 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.026 [2024-04-24 16:07:36.062084] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.026 [2024-04-24 16:07:36.176670] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
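Both pings answer, which validates the rig nvmftestinit assembled above: one E810 port (cvl_0_0) is pushed into a private network namespace as the target side, while its peer (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), as the nvmfappstart trace above shows.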
00:11:35.026 [2024-04-24 16:07:36.176754] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.027 [2024-04-24 16:07:36.176773] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.027 [2024-04-24 16:07:36.176786] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.027 [2024-04-24 16:07:36.176799] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.027 [2024-04-24 16:07:36.177111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.027 [2024-04-24 16:07:36.177166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.027 [2024-04-24 16:07:36.177232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:35.027 [2024-04-24 16:07:36.177235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.963 16:07:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:35.963 16:07:36 -- common/autotest_common.sh@850 -- # return 0 00:11:35.963 16:07:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:35.963 16:07:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:35.963 16:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:35.963 16:07:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.963 16:07:36 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.963 16:07:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.963 16:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:35.963 [2024-04-24 16:07:36.951565] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.964 16:07:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.964 16:07:36 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:35.964 16:07:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:35.964 16:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:35.964 16:07:36 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:35.964 16:07:36 -- target/host_management.sh@23 -- # cat 00:11:35.964 16:07:36 -- target/host_management.sh@30 -- # rpc_cmd 00:11:35.964 16:07:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.964 16:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:35.964 Malloc0 00:11:35.964 [2024-04-24 16:07:37.012419] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.964 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.964 16:07:37 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:35.964 16:07:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:35.964 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:11:35.964 16:07:37 -- target/host_management.sh@73 -- # perfpid=3361895 00:11:35.964 16:07:37 -- target/host_management.sh@74 -- # waitforlisten 3361895 /var/tmp/bdevperf.sock 00:11:35.964 16:07:37 -- common/autotest_common.sh@817 -- # '[' -z 3361895 ']' 00:11:35.964 16:07:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:35.964 16:07:37 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:35.964 16:07:37 -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:35.964 16:07:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:35.964 16:07:37 -- nvmf/common.sh@521 -- # config=() 00:11:35.964 16:07:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:35.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:35.964 16:07:37 -- nvmf/common.sh@521 -- # local subsystem config 00:11:35.964 16:07:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:35.964 16:07:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:35.964 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:11:35.964 16:07:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:35.964 { 00:11:35.964 "params": { 00:11:35.964 "name": "Nvme$subsystem", 00:11:35.964 "trtype": "$TEST_TRANSPORT", 00:11:35.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.964 "adrfam": "ipv4", 00:11:35.964 "trsvcid": "$NVMF_PORT", 00:11:35.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.964 "hdgst": ${hdgst:-false}, 00:11:35.964 "ddgst": ${ddgst:-false} 00:11:35.964 }, 00:11:35.964 "method": "bdev_nvme_attach_controller" 00:11:35.964 } 00:11:35.964 EOF 00:11:35.964 )") 00:11:35.964 16:07:37 -- nvmf/common.sh@543 -- # cat 00:11:35.964 16:07:37 -- nvmf/common.sh@545 -- # jq . 00:11:35.964 16:07:37 -- nvmf/common.sh@546 -- # IFS=, 00:11:35.964 16:07:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:35.964 "params": { 00:11:35.964 "name": "Nvme0", 00:11:35.964 "trtype": "tcp", 00:11:35.964 "traddr": "10.0.0.2", 00:11:35.964 "adrfam": "ipv4", 00:11:35.964 "trsvcid": "4420", 00:11:35.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:35.964 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:35.964 "hdgst": false, 00:11:35.964 "ddgst": false 00:11:35.964 }, 00:11:35.964 "method": "bdev_nvme_attach_controller" 00:11:35.964 }' 00:11:35.964 [2024-04-24 16:07:37.089821] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:11:35.964 [2024-04-24 16:07:37.089910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3361895 ] 00:11:35.964 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.964 [2024-04-24 16:07:37.150413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.223 [2024-04-24 16:07:37.254330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.484 Running I/O for 10 seconds... 
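gen_nvmf_target_json, traced above, renders the bdev_nvme_attach_controller parameters from a heredoc template, wraps them in an SPDK JSON config, and hands the result to bdevperf over an anonymous /dev/fd descriptor. A minimal sketch of the same pattern follows; the subsystems/bdev wrapper is inferred from SPDK's JSON config layout rather than copied from the helper, so treat it as an assumption:

gen_config() {
        cat <<EOF
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
}
# process substitution is what produces the /dev/fd/63 argument seen above
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_config) -q 64 -o 65536 -w verify -t 10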
00:11:36.484 16:07:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:36.484 16:07:37 -- common/autotest_common.sh@850 -- # return 0 00:11:36.484 16:07:37 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:36.484 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.484 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:11:36.484 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.484 16:07:37 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:36.484 16:07:37 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:36.484 16:07:37 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:36.484 16:07:37 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:36.484 16:07:37 -- target/host_management.sh@52 -- # local ret=1 00:11:36.484 16:07:37 -- target/host_management.sh@53 -- # local i 00:11:36.484 16:07:37 -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:36.484 16:07:37 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:36.484 16:07:37 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:36.484 16:07:37 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:36.484 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.484 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:11:36.484 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.484 16:07:37 -- target/host_management.sh@55 -- # read_io_count=67 00:11:36.484 16:07:37 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:36.484 16:07:37 -- target/host_management.sh@62 -- # sleep 0.25 00:11:36.746 16:07:37 -- target/host_management.sh@54 -- # (( i-- )) 00:11:36.746 16:07:37 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:36.746 16:07:37 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:36.746 16:07:37 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:36.746 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.746 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:11:36.746 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.746 16:07:37 -- target/host_management.sh@55 -- # read_io_count=515 00:11:36.746 16:07:37 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:11:36.746 16:07:37 -- target/host_management.sh@59 -- # ret=0 00:11:36.746 16:07:37 -- target/host_management.sh@60 -- # break 00:11:36.746 16:07:37 -- target/host_management.sh@64 -- # return 0 00:11:36.746 16:07:37 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:36.746 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.746 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:11:36.746 [2024-04-24 16:07:37.919394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac1ec0 is same with the state(5) to be set 00:11:36.746 [2024-04-24 16:07:37.919475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac1ec0 is same with the state(5) to be set 00:11:36.746 [2024-04-24 16:07:37.919490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac1ec0 is same with the state(5) to be set 00:11:36.746 
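The waitforio helper driving the countdown above (before the remove_host step kicks in) polls bdevperf's per-bdev counters over its private RPC socket until enough reads have completed. A condensed equivalent; rpc_cmd in the trace is a thin wrapper over rpc.py, and the 100-read threshold with ten 0.25 s retries matches the values visible in this run:

waitforio() {
        local rpc_sock=$1 bdev=$2 i=10 count
        while (( i-- > 0 )); do
                count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                        jq -r '.bdevs[0].num_read_ops')
                (( count >= 100 )) && return 0   # enough I/O observed
                sleep 0.25
        done
        return 1
}
waitforio /var/tmp/bdevperf.sock Nvme0n1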
[2024-04-24 16:07:37.919503] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac1ec0 is same with the state(5) to be set
00:11:36.746 [... the same recv-state message repeats 10 more times for tqpair=0xac1ec0 ...]
00:11:36.746 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:11:36.746 16:07:37 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:36.746 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:11:36.746 16:07:37 -- common/autotest_common.sh@10 -- # set +x
00:11:36.746 [2024-04-24 16:07:37.927352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:11:36.746 [2024-04-24 16:07:37.927392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:36.746 [... three more admin ASYNC EVENT REQUESTs (cid:1 through cid:3) are aborted with the same SQ DELETION status ...]
00:11:36.746 [2024-04-24 16:07:37.927497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2197160 is same with the state(5) to be set
00:11:36.746 [2024-04-24 16:07:37.928366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:36.746 [2024-04-24 16:07:37.928393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:36.748 [... the same WRITE / ABORTED - SQ DELETION pair repeats for the remaining 63 outstanding I/Os, cid:1 lba:73856 through cid:63 lba:81792, len:128 each ...]
00:11:36.748 [2024-04-24 16:07:37.930488] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25c7db0 was disconnected and freed. reset controller.
00:11:36.748 [2024-04-24 16:07:37.931630] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:36.748 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:11:36.748 16:07:37 -- target/host_management.sh@87 -- # sleep 1
00:11:36.748 task offset: 73728 on job bdev=Nvme0n1 fails
00:11:36.748 00:11:36.748 Latency(us)
00:11:36.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:36.748 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:36.748 Job: Nvme0n1 ended in about 0.41 seconds with error
00:11:36.748 Verification LBA range: start 0x0 length 0x400
00:11:36.748 Nvme0n1 : 0.41 1405.79 87.86 156.20 0.00 39840.77 2548.62 36700.16
00:11:36.748 ===================================================================================================================
00:11:36.748 Total : 1405.79 87.86 156.20 0.00 39840.77 2548.62 36700.16
00:11:36.748 [2024-04-24 16:07:37.933539] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:36.748 [2024-04-24 16:07:37.933568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2197160 (9): Bad file descriptor
00:11:36.748 [2024-04-24 16:07:38.026929] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
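What the abort storm above is exercising: nvmf_subsystem_remove_host revokes the host NQN while 64 writes are in flight, so the target tears down the queue pair and every outstanding command completes with ABORTED - SQ DELETION; nvmf_subsystem_add_host then lets the initiator's automatic reset reconnect. Against a running target the round trip is just the following (rpc.py path shortened from the workspace-absolute one in this log):

rpc=./scripts/rpc.py
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # give the abort and controller reset a moment, as the script does
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0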
00:11:37.684 16:07:38 -- target/host_management.sh@91 -- # kill -9 3361895 00:11:37.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3361895) - No such process 00:11:37.684 16:07:38 -- target/host_management.sh@91 -- # true 00:11:37.684 16:07:38 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:37.684 16:07:38 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:37.684 16:07:38 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:37.684 16:07:38 -- nvmf/common.sh@521 -- # config=() 00:11:37.684 16:07:38 -- nvmf/common.sh@521 -- # local subsystem config 00:11:37.684 16:07:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:37.684 16:07:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:37.684 { 00:11:37.684 "params": { 00:11:37.684 "name": "Nvme$subsystem", 00:11:37.684 "trtype": "$TEST_TRANSPORT", 00:11:37.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.684 "adrfam": "ipv4", 00:11:37.684 "trsvcid": "$NVMF_PORT", 00:11:37.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.684 "hdgst": ${hdgst:-false}, 00:11:37.684 "ddgst": ${ddgst:-false} 00:11:37.684 }, 00:11:37.684 "method": "bdev_nvme_attach_controller" 00:11:37.685 } 00:11:37.685 EOF 00:11:37.685 )") 00:11:37.685 16:07:38 -- nvmf/common.sh@543 -- # cat 00:11:37.685 16:07:38 -- nvmf/common.sh@545 -- # jq . 00:11:37.685 16:07:38 -- nvmf/common.sh@546 -- # IFS=, 00:11:37.685 16:07:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:37.685 "params": { 00:11:37.685 "name": "Nvme0", 00:11:37.685 "trtype": "tcp", 00:11:37.685 "traddr": "10.0.0.2", 00:11:37.685 "adrfam": "ipv4", 00:11:37.685 "trsvcid": "4420", 00:11:37.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:37.685 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:37.685 "hdgst": false, 00:11:37.685 "ddgst": false 00:11:37.685 }, 00:11:37.685 "method": "bdev_nvme_attach_controller" 00:11:37.685 }' 00:11:37.944 [2024-04-24 16:07:38.980160] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:11:37.944 [2024-04-24 16:07:38.980243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3362057 ] 00:11:37.944 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.944 [2024-04-24 16:07:39.042421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.944 [2024-04-24 16:07:39.146408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.203 Running I/O for 1 seconds... 
00:11:39.144 00:11:39.144 Latency(us) 00:11:39.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.144 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:39.144 Verification LBA range: start 0x0 length 0x400 00:11:39.144 Nvme0n1 : 1.02 1500.17 93.76 0.00 0.00 41996.67 10194.49 36505.98 00:11:39.144 =================================================================================================================== 00:11:39.144 Total : 1500.17 93.76 0.00 0.00 41996.67 10194.49 36505.98 00:11:39.402 16:07:40 -- target/host_management.sh@102 -- # stoptarget 00:11:39.402 16:07:40 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:39.402 16:07:40 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:39.402 16:07:40 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:39.402 16:07:40 -- target/host_management.sh@40 -- # nvmftestfini 00:11:39.402 16:07:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:39.402 16:07:40 -- nvmf/common.sh@117 -- # sync 00:11:39.402 16:07:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.402 16:07:40 -- nvmf/common.sh@120 -- # set +e 00:11:39.402 16:07:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.402 16:07:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.402 rmmod nvme_tcp 00:11:39.402 rmmod nvme_fabrics 00:11:39.687 rmmod nvme_keyring 00:11:39.687 16:07:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.687 16:07:40 -- nvmf/common.sh@124 -- # set -e 00:11:39.687 16:07:40 -- nvmf/common.sh@125 -- # return 0 00:11:39.687 16:07:40 -- nvmf/common.sh@478 -- # '[' -n 3361721 ']' 00:11:39.687 16:07:40 -- nvmf/common.sh@479 -- # killprocess 3361721 00:11:39.687 16:07:40 -- common/autotest_common.sh@936 -- # '[' -z 3361721 ']' 00:11:39.687 16:07:40 -- common/autotest_common.sh@940 -- # kill -0 3361721 00:11:39.687 16:07:40 -- common/autotest_common.sh@941 -- # uname 00:11:39.687 16:07:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.687 16:07:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3361721 00:11:39.687 16:07:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:39.687 16:07:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:39.687 16:07:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3361721' 00:11:39.687 killing process with pid 3361721 00:11:39.687 16:07:40 -- common/autotest_common.sh@955 -- # kill 3361721 00:11:39.687 16:07:40 -- common/autotest_common.sh@960 -- # wait 3361721 00:11:39.967 [2024-04-24 16:07:40.993005] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:39.967 16:07:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:39.967 16:07:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:39.967 16:07:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:39.967 16:07:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.967 16:07:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:39.967 16:07:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.967 16:07:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.967 16:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.882 16:07:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
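The killprocess call traced above sanity-checks what it is about to signal before killing the target. A simplified sketch; the real helper also special-cases processes launched under sudo, which is omitted here:

killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0      # already gone
        local pname
        pname=$(ps --no-headers -o comm= "$pid")     # reactor_1 in this run
        [[ $pname == sudo ]] && return 1             # sudo path omitted in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true             # reap it if it is our child
}
killprocess "$nvmfpid"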
00:11:41.882 00:11:41.882 real 0m7.129s 00:11:41.882 user 0m21.784s 00:11:41.882 sys 0m1.173s 00:11:41.882 16:07:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:41.882 16:07:43 -- common/autotest_common.sh@10 -- # set +x 00:11:41.882 ************************************ 00:11:41.882 END TEST nvmf_host_management 00:11:41.882 ************************************ 00:11:41.882 16:07:43 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:41.882 00:11:41.882 real 0m9.472s 00:11:41.882 user 0m22.643s 00:11:41.882 sys 0m2.673s 00:11:41.882 16:07:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:41.882 16:07:43 -- common/autotest_common.sh@10 -- # set +x 00:11:41.882 ************************************ 00:11:41.882 END TEST nvmf_host_management 00:11:41.882 ************************************ 00:11:41.882 16:07:43 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:41.882 16:07:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:41.882 16:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.882 16:07:43 -- common/autotest_common.sh@10 -- # set +x 00:11:42.140 ************************************ 00:11:42.140 START TEST nvmf_lvol 00:11:42.140 ************************************ 00:11:42.140 16:07:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:42.140 * Looking for test storage... 00:11:42.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.140 16:07:43 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.140 16:07:43 -- nvmf/common.sh@7 -- # uname -s 00:11:42.141 16:07:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.141 16:07:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.141 16:07:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.141 16:07:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.141 16:07:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.141 16:07:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.141 16:07:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.141 16:07:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.141 16:07:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.141 16:07:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.141 16:07:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:42.141 16:07:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:42.141 16:07:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.141 16:07:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.141 16:07:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.141 16:07:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.141 16:07:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.141 16:07:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.141 16:07:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.141 16:07:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.141 16:07:43 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain dirs repeated by each earlier sourcing of export.sh ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:42.141 16:07:43 -- paths/export.sh@3 -- # PATH=[... same value with /opt/go/1.21.1/bin prepended ...]
00:11:42.141 16:07:43 -- paths/export.sh@4 -- # PATH=[... same value with /opt/protoc/21.7/bin prepended ...]
00:11:42.141 16:07:43 -- paths/export.sh@5 -- # export PATH
00:11:42.141 16:07:43 -- paths/export.sh@6 -- # echo [... the exported PATH from @4 ...]
00:11:42.141 16:07:43 -- nvmf/common.sh@47 -- # : 0
00:11:42.141 16:07:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:42.141 16:07:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:42.141 16:07:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:42.141 16:07:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:42.141 16:07:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:42.141 16:07:43 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:42.141 16:07:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:42.141 16:07:43 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:42.141 16:07:43 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:42.141 16:07:43 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:42.141 16:07:43 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:11:42.141 16:07:43 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:11:42.141 16:07:43 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:42.141 16:07:43 -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:11:42.141 16:07:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:11:42.141 16:07:43 -- nvmf/common.sh@435 -- #
trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.141 16:07:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:42.141 16:07:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:42.141 16:07:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:42.141 16:07:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.141 16:07:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.141 16:07:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.141 16:07:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:42.141 16:07:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:42.141 16:07:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:42.141 16:07:43 -- common/autotest_common.sh@10 -- # set +x 00:11:44.047 16:07:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:44.047 16:07:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:44.047 16:07:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:44.047 16:07:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:44.048 16:07:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:44.048 16:07:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:44.048 16:07:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:44.048 16:07:45 -- nvmf/common.sh@295 -- # net_devs=() 00:11:44.048 16:07:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:44.048 16:07:45 -- nvmf/common.sh@296 -- # e810=() 00:11:44.048 16:07:45 -- nvmf/common.sh@296 -- # local -ga e810 00:11:44.048 16:07:45 -- nvmf/common.sh@297 -- # x722=() 00:11:44.048 16:07:45 -- nvmf/common.sh@297 -- # local -ga x722 00:11:44.048 16:07:45 -- nvmf/common.sh@298 -- # mlx=() 00:11:44.048 16:07:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:44.048 16:07:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.048 16:07:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:44.048 16:07:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:44.048 16:07:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:44.048 16:07:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.048 16:07:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:44.048 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:44.048 16:07:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.048 
16:07:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.048 16:07:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:44.048 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:44.048 16:07:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:44.048 16:07:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.048 16:07:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.048 16:07:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:44.048 16:07:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.048 16:07:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:44.048 Found net devices under 0000:09:00.0: cvl_0_0 00:11:44.048 16:07:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.048 16:07:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.048 16:07:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.048 16:07:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:44.048 16:07:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.048 16:07:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:44.048 Found net devices under 0000:09:00.1: cvl_0_1 00:11:44.048 16:07:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.048 16:07:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:44.048 16:07:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:44.048 16:07:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:44.048 16:07:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.048 16:07:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.048 16:07:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.048 16:07:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:44.048 16:07:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.048 16:07:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.048 16:07:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:44.048 16:07:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.048 16:07:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.048 16:07:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:44.048 16:07:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:44.048 16:07:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.048 16:07:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.048 16:07:45 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:11:44.048 16:07:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.048 16:07:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:44.048 16:07:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.048 16:07:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.048 16:07:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.048 16:07:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:11:44.048 00:11:44.048 --- 10.0.0.2 ping statistics --- 00:11:44.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.048 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:11:44.048 16:07:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:11:44.048 00:11:44.048 --- 10.0.0.1 ping statistics --- 00:11:44.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.048 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:44.048 16:07:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.048 16:07:45 -- nvmf/common.sh@411 -- # return 0 00:11:44.048 16:07:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:44.048 16:07:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.048 16:07:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:44.048 16:07:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.048 16:07:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:44.048 16:07:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:44.048 16:07:45 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:44.048 16:07:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:44.048 16:07:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:44.048 16:07:45 -- common/autotest_common.sh@10 -- # set +x 00:11:44.048 16:07:45 -- nvmf/common.sh@470 -- # nvmfpid=3364275 00:11:44.048 16:07:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:44.048 16:07:45 -- nvmf/common.sh@471 -- # waitforlisten 3364275 00:11:44.048 16:07:45 -- common/autotest_common.sh@817 -- # '[' -z 3364275 ']' 00:11:44.048 16:07:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.048 16:07:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:44.048 16:07:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.048 16:07:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:44.048 16:07:45 -- common/autotest_common.sh@10 -- # set +x 00:11:44.048 [2024-04-24 16:07:45.331516] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
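For reference, the nvmf_tcp_init sequence traced just above (and at the start of this section) reduces to the following; the interface names are the two E810 ports on this rig, so substitute your own:

tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
ip -4 addr flush "$tgt_if" && ip -4 addr flush "$ini_if"
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"                          # target port moves into the netns
ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator side
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1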
00:11:44.048 [2024-04-24 16:07:45.331608] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.308 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.308 [2024-04-24 16:07:45.403237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:44.308 [2024-04-24 16:07:45.514722] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.308 [2024-04-24 16:07:45.514802] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.308 [2024-04-24 16:07:45.514830] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.308 [2024-04-24 16:07:45.514844] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.308 [2024-04-24 16:07:45.514857] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.308 [2024-04-24 16:07:45.514941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.308 [2024-04-24 16:07:45.514993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.308 [2024-04-24 16:07:45.515011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.243 16:07:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:45.243 16:07:46 -- common/autotest_common.sh@850 -- # return 0 00:11:45.243 16:07:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:45.243 16:07:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:45.243 16:07:46 -- common/autotest_common.sh@10 -- # set +x 00:11:45.243 16:07:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.243 16:07:46 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:45.243 [2024-04-24 16:07:46.521910] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.503 16:07:46 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:45.761 16:07:46 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:45.761 16:07:46 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:46.019 16:07:47 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:46.019 16:07:47 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:46.019 16:07:47 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:46.586 16:07:47 -- target/nvmf_lvol.sh@29 -- # lvs=3e04e7b4-d523-4501-a9d4-ba7d9301de39 00:11:46.586 16:07:47 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3e04e7b4-d523-4501-a9d4-ba7d9301de39 lvol 20 00:11:46.586 16:07:47 -- target/nvmf_lvol.sh@32 -- # lvol=5395d5d1-1834-4ed0-94f3-1fbee898a0d6 00:11:46.586 16:07:47 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:46.844 16:07:48 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5395d5d1-1834-4ed0-94f3-1fbee898a0d6 00:11:47.102 16:07:48 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:47.360 [2024-04-24 16:07:48.518832] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.360 16:07:48 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:47.618 16:07:48 -- target/nvmf_lvol.sh@42 -- # perf_pid=3364706 00:11:47.618 16:07:48 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:47.618 16:07:48 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:47.618 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.556 16:07:49 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5395d5d1-1834-4ed0-94f3-1fbee898a0d6 MY_SNAPSHOT 00:11:48.813 16:07:50 -- target/nvmf_lvol.sh@47 -- # snapshot=998f8700-1971-455c-a803-6da4cc0880f6 00:11:48.813 16:07:50 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5395d5d1-1834-4ed0-94f3-1fbee898a0d6 30 00:11:49.071 16:07:50 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 998f8700-1971-455c-a803-6da4cc0880f6 MY_CLONE 00:11:49.638 16:07:50 -- target/nvmf_lvol.sh@49 -- # clone=0f6d4394-81ba-4805-b03b-ef400abf5696 00:11:49.638 16:07:50 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0f6d4394-81ba-4805-b03b-ef400abf5696 00:11:49.897 16:07:51 -- target/nvmf_lvol.sh@53 -- # wait 3364706 00:11:58.029 Initializing NVMe Controllers 00:11:58.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:58.029 Controller IO queue size 128, less than required. 00:11:58.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:58.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:58.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:58.029 Initialization complete. Launching workers. 
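Up to this point the lvol target test has built its whole stack over RPC: a TCP transport, two 64 MiB malloc bdevs striped into raid0, an lvstore on the raid, a 20 MiB lvol exported as namespace 1 of cnode0, and, while spdk_nvme_perf drives random 4K writes from two cores, a snapshot, a resize to 30 MiB, a clone, and an inflate. A condensed sketch of that RPC sequence (rpc.py stands in for the full scripts/rpc.py path used in the trace, and assumes nvmf_tgt is already listening on /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                      # -> Malloc0
    rpc.py bdev_malloc_create 64 512                      # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # While spdk_nvme_perf writes to the namespace, exercise the lvol features:
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"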
00:11:58.029 ======================================================== 00:11:58.029 Latency(us) 00:11:58.029 Device Information : IOPS MiB/s Average min max 00:11:58.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10577.40 41.32 12107.87 1330.72 78190.24 00:11:58.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10549.60 41.21 12142.81 2563.21 82630.17 00:11:58.029 ======================================================== 00:11:58.029 Total : 21127.00 82.53 12125.32 1330.72 82630.17 00:11:58.029 00:11:58.029 16:07:59 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:58.288 16:07:59 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5395d5d1-1834-4ed0-94f3-1fbee898a0d6 00:11:58.546 16:07:59 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e04e7b4-d523-4501-a9d4-ba7d9301de39 00:11:58.806 16:07:59 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:58.806 16:07:59 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:58.806 16:07:59 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:58.806 16:07:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:58.806 16:07:59 -- nvmf/common.sh@117 -- # sync 00:11:58.806 16:07:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:58.806 16:07:59 -- nvmf/common.sh@120 -- # set +e 00:11:58.806 16:07:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.806 16:07:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:58.806 rmmod nvme_tcp 00:11:58.806 rmmod nvme_fabrics 00:11:58.806 rmmod nvme_keyring 00:11:58.806 16:07:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.806 16:07:59 -- nvmf/common.sh@124 -- # set -e 00:11:58.806 16:07:59 -- nvmf/common.sh@125 -- # return 0 00:11:58.806 16:07:59 -- nvmf/common.sh@478 -- # '[' -n 3364275 ']' 00:11:58.806 16:07:59 -- nvmf/common.sh@479 -- # killprocess 3364275 00:11:58.806 16:07:59 -- common/autotest_common.sh@936 -- # '[' -z 3364275 ']' 00:11:58.806 16:07:59 -- common/autotest_common.sh@940 -- # kill -0 3364275 00:11:58.806 16:07:59 -- common/autotest_common.sh@941 -- # uname 00:11:58.806 16:07:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.806 16:07:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3364275 00:11:58.806 16:08:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:58.806 16:08:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:58.806 16:08:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3364275' 00:11:58.806 killing process with pid 3364275 00:11:58.806 16:08:00 -- common/autotest_common.sh@955 -- # kill 3364275 00:11:58.806 16:08:00 -- common/autotest_common.sh@960 -- # wait 3364275 00:11:59.065 16:08:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:59.065 16:08:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:59.065 16:08:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:59.065 16:08:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.065 16:08:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:59.065 16:08:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.065 16:08:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.065 16:08:00 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:01.631 16:08:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.631 00:12:01.631 real 0m19.155s 00:12:01.631 user 1m4.813s 00:12:01.631 sys 0m5.948s 00:12:01.631 16:08:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:01.631 16:08:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.631 ************************************ 00:12:01.631 END TEST nvmf_lvol 00:12:01.631 ************************************ 00:12:01.631 16:08:02 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:01.631 16:08:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:01.631 16:08:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.631 16:08:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.631 ************************************ 00:12:01.631 START TEST nvmf_lvs_grow 00:12:01.631 ************************************ 00:12:01.631 16:08:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:01.631 * Looking for test storage... 00:12:01.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.631 16:08:02 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.631 16:08:02 -- nvmf/common.sh@7 -- # uname -s 00:12:01.631 16:08:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.631 16:08:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.631 16:08:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.631 16:08:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.631 16:08:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.631 16:08:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.631 16:08:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.631 16:08:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.631 16:08:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.631 16:08:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.631 16:08:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:01.631 16:08:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:01.631 16:08:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.631 16:08:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.631 16:08:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.631 16:08:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.631 16:08:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.631 16:08:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.631 16:08:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.631 16:08:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.631 16:08:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.631 16:08:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.631 16:08:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.631 16:08:02 -- paths/export.sh@5 -- # export PATH 00:12:01.631 16:08:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.631 16:08:02 -- nvmf/common.sh@47 -- # : 0 00:12:01.632 16:08:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.632 16:08:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.632 16:08:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.632 16:08:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.632 16:08:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.632 16:08:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.632 16:08:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.632 16:08:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.632 16:08:02 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.632 16:08:02 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:01.632 16:08:02 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:12:01.632 16:08:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:01.632 16:08:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.632 16:08:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:01.632 16:08:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:01.632 16:08:02 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:12:01.632 16:08:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.632 16:08:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.632 16:08:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.632 16:08:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:01.632 16:08:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:01.632 16:08:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.632 16:08:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.533 16:08:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:03.533 16:08:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:03.533 16:08:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:03.533 16:08:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:03.533 16:08:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:03.533 16:08:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:03.533 16:08:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:03.533 16:08:04 -- nvmf/common.sh@295 -- # net_devs=() 00:12:03.533 16:08:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:03.533 16:08:04 -- nvmf/common.sh@296 -- # e810=() 00:12:03.533 16:08:04 -- nvmf/common.sh@296 -- # local -ga e810 00:12:03.533 16:08:04 -- nvmf/common.sh@297 -- # x722=() 00:12:03.533 16:08:04 -- nvmf/common.sh@297 -- # local -ga x722 00:12:03.533 16:08:04 -- nvmf/common.sh@298 -- # mlx=() 00:12:03.533 16:08:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:03.533 16:08:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.533 16:08:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:03.533 16:08:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:03.533 16:08:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:03.533 16:08:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.533 16:08:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:03.533 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:03.533 16:08:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:03.533 
16:08:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.533 16:08:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:03.533 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:03.533 16:08:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:03.533 16:08:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.533 16:08:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.533 16:08:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:03.533 16:08:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.533 16:08:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:03.533 Found net devices under 0000:09:00.0: cvl_0_0 00:12:03.533 16:08:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.533 16:08:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.533 16:08:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.533 16:08:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:03.533 16:08:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.533 16:08:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:03.533 Found net devices under 0000:09:00.1: cvl_0_1 00:12:03.533 16:08:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.533 16:08:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:03.533 16:08:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:03.533 16:08:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:03.533 16:08:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:03.533 16:08:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.533 16:08:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.533 16:08:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.533 16:08:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:03.533 16:08:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.533 16:08:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.533 16:08:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:03.533 16:08:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.533 16:08:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.533 16:08:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:03.533 16:08:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:03.533 16:08:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.533 16:08:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.533 16:08:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.533 16:08:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.533 16:08:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:03.533 
16:08:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.533 16:08:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.533 16:08:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.533 16:08:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:03.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:12:03.533 00:12:03.533 --- 10.0.0.2 ping statistics --- 00:12:03.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.534 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:12:03.534 16:08:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:12:03.534 00:12:03.534 --- 10.0.0.1 ping statistics --- 00:12:03.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.534 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:03.534 16:08:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.534 16:08:04 -- nvmf/common.sh@411 -- # return 0 00:12:03.534 16:08:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:03.534 16:08:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.534 16:08:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:03.534 16:08:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:03.534 16:08:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.534 16:08:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:03.534 16:08:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:03.534 16:08:04 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:12:03.534 16:08:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:03.534 16:08:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:03.534 16:08:04 -- common/autotest_common.sh@10 -- # set +x 00:12:03.534 16:08:04 -- nvmf/common.sh@470 -- # nvmfpid=3367976 00:12:03.534 16:08:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:03.534 16:08:04 -- nvmf/common.sh@471 -- # waitforlisten 3367976 00:12:03.534 16:08:04 -- common/autotest_common.sh@817 -- # '[' -z 3367976 ']' 00:12:03.534 16:08:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.534 16:08:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:03.534 16:08:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.534 16:08:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:03.534 16:08:04 -- common/autotest_common.sh@10 -- # set +x 00:12:03.534 [2024-04-24 16:08:04.749621] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
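The lvs_grow test starts from a clean slate: common.sh rebuilds its PCI whitelist (e810, x722 and mlx entries keyed by vendor:device ID), matches both 0x8086:0x159b functions, resolves each one to its kernel netdev through sysfs, and then repeats the namespace setup before launching a single-core nvmf_tgt. The sysfs mapping is a small idiom worth isolating; a sketch assuming the two PCI addresses from this run are already known:

    # Resolve each whitelisted PCI function to its net device via sysfs,
    # as gather_supported_nvmf_pci_devs does above.
    pci_devs=(0000:09:00.0 0000:09:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one dir per netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the ifname only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    # With two interfaces found, the first becomes the target side and the
    # second the initiator side, exactly as in the trace (cvl_0_0 / cvl_0_1).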
00:12:03.534 [2024-04-24 16:08:04.749694] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.534 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.534 [2024-04-24 16:08:04.817977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.793 [2024-04-24 16:08:04.929591] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.793 [2024-04-24 16:08:04.929659] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.793 [2024-04-24 16:08:04.929685] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.793 [2024-04-24 16:08:04.929699] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.793 [2024-04-24 16:08:04.929711] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.793 [2024-04-24 16:08:04.929783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.730 16:08:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:04.730 16:08:05 -- common/autotest_common.sh@850 -- # return 0 00:12:04.730 16:08:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:04.730 16:08:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:04.730 16:08:05 -- common/autotest_common.sh@10 -- # set +x 00:12:04.730 16:08:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.730 16:08:05 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:04.730 [2024-04-24 16:08:05.988472] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.730 16:08:06 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:12:04.730 16:08:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:04.730 16:08:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.730 16:08:06 -- common/autotest_common.sh@10 -- # set +x 00:12:04.988 ************************************ 00:12:04.988 START TEST lvs_grow_clean 00:12:04.988 ************************************ 00:12:04.988 16:08:06 -- common/autotest_common.sh@1111 -- # lvs_grow 00:12:04.988 16:08:06 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:04.988 16:08:06 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:04.988 16:08:06 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:04.988 16:08:06 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:04.988 16:08:06 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:04.988 16:08:06 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:04.988 16:08:06 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:04.988 16:08:06 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:04.988 16:08:06 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:05.247 16:08:06 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:05.247 16:08:06 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:05.507 16:08:06 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:05.507 16:08:06 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:05.507 16:08:06 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:05.767 16:08:06 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:05.767 16:08:06 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:05.767 16:08:06 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 lvol 150 00:12:06.026 16:08:07 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4661948e-91b3-4a0b-8df1-4d8e327cb4cc 00:12:06.026 16:08:07 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.026 16:08:07 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:06.285 [2024-04-24 16:08:07.331888] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:06.285 [2024-04-24 16:08:07.331976] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:06.285 true 00:12:06.285 16:08:07 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:06.285 16:08:07 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:06.545 16:08:07 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:06.545 16:08:07 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:06.805 16:08:07 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4661948e-91b3-4a0b-8df1-4d8e327cb4cc 00:12:06.805 16:08:08 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:07.064 [2024-04-24 16:08:08.310913] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.064 16:08:08 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:07.322 16:08:08 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3368510 00:12:07.322 16:08:08 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:07.322 16:08:08 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:07.322 16:08:08 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3368510 
/var/tmp/bdevperf.sock 00:12:07.322 16:08:08 -- common/autotest_common.sh@817 -- # '[' -z 3368510 ']' 00:12:07.322 16:08:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:07.322 16:08:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:07.322 16:08:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:07.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:07.322 16:08:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:07.322 16:08:08 -- common/autotest_common.sh@10 -- # set +x 00:12:07.580 [2024-04-24 16:08:08.612189] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:12:07.580 [2024-04-24 16:08:08.612267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368510 ] 00:12:07.580 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.580 [2024-04-24 16:08:08.674344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.580 [2024-04-24 16:08:08.784764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.839 16:08:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:07.839 16:08:08 -- common/autotest_common.sh@850 -- # return 0 00:12:07.839 16:08:08 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:08.097 Nvme0n1 00:12:08.097 16:08:09 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:08.355 [ 00:12:08.355 { 00:12:08.355 "name": "Nvme0n1", 00:12:08.355 "aliases": [ 00:12:08.355 "4661948e-91b3-4a0b-8df1-4d8e327cb4cc" 00:12:08.355 ], 00:12:08.355 "product_name": "NVMe disk", 00:12:08.355 "block_size": 4096, 00:12:08.355 "num_blocks": 38912, 00:12:08.355 "uuid": "4661948e-91b3-4a0b-8df1-4d8e327cb4cc", 00:12:08.355 "assigned_rate_limits": { 00:12:08.355 "rw_ios_per_sec": 0, 00:12:08.355 "rw_mbytes_per_sec": 0, 00:12:08.355 "r_mbytes_per_sec": 0, 00:12:08.355 "w_mbytes_per_sec": 0 00:12:08.355 }, 00:12:08.355 "claimed": false, 00:12:08.355 "zoned": false, 00:12:08.355 "supported_io_types": { 00:12:08.355 "read": true, 00:12:08.355 "write": true, 00:12:08.355 "unmap": true, 00:12:08.355 "write_zeroes": true, 00:12:08.355 "flush": true, 00:12:08.355 "reset": true, 00:12:08.355 "compare": true, 00:12:08.355 "compare_and_write": true, 00:12:08.355 "abort": true, 00:12:08.355 "nvme_admin": true, 00:12:08.355 "nvme_io": true 00:12:08.355 }, 00:12:08.355 "memory_domains": [ 00:12:08.355 { 00:12:08.355 "dma_device_id": "system", 00:12:08.355 "dma_device_type": 1 00:12:08.355 } 00:12:08.355 ], 00:12:08.355 "driver_specific": { 00:12:08.355 "nvme": [ 00:12:08.355 { 00:12:08.355 "trid": { 00:12:08.355 "trtype": "TCP", 00:12:08.355 "adrfam": "IPv4", 00:12:08.355 "traddr": "10.0.0.2", 00:12:08.355 "trsvcid": "4420", 00:12:08.355 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:08.355 }, 00:12:08.355 "ctrlr_data": { 00:12:08.355 "cntlid": 1, 00:12:08.355 "vendor_id": "0x8086", 00:12:08.355 "model_number": "SPDK bdev Controller", 00:12:08.355 "serial_number": "SPDK0", 
00:12:08.355 "firmware_revision": "24.05", 00:12:08.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:08.355 "oacs": { 00:12:08.355 "security": 0, 00:12:08.355 "format": 0, 00:12:08.355 "firmware": 0, 00:12:08.355 "ns_manage": 0 00:12:08.355 }, 00:12:08.355 "multi_ctrlr": true, 00:12:08.355 "ana_reporting": false 00:12:08.355 }, 00:12:08.355 "vs": { 00:12:08.355 "nvme_version": "1.3" 00:12:08.355 }, 00:12:08.355 "ns_data": { 00:12:08.355 "id": 1, 00:12:08.355 "can_share": true 00:12:08.355 } 00:12:08.355 } 00:12:08.355 ], 00:12:08.355 "mp_policy": "active_passive" 00:12:08.355 } 00:12:08.356 } 00:12:08.356 ] 00:12:08.356 16:08:09 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3368567 00:12:08.356 16:08:09 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:08.356 16:08:09 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:08.356 Running I/O for 10 seconds... 00:12:09.292 Latency(us) 00:12:09.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.292 Nvme0n1 : 1.00 13277.00 51.86 0.00 0.00 0.00 0.00 0.00 00:12:09.292 =================================================================================================================== 00:12:09.292 Total : 13277.00 51.86 0.00 0.00 0.00 0.00 0.00 00:12:09.292 00:12:10.227 16:08:11 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:10.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.485 Nvme0n1 : 2.00 13386.50 52.29 0.00 0.00 0.00 0.00 0.00 00:12:10.485 =================================================================================================================== 00:12:10.485 Total : 13386.50 52.29 0.00 0.00 0.00 0.00 0.00 00:12:10.485 00:12:10.485 true 00:12:10.485 16:08:11 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:10.485 16:08:11 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:10.743 16:08:11 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:10.743 16:08:11 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:10.743 16:08:11 -- target/nvmf_lvs_grow.sh@65 -- # wait 3368567 00:12:11.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.311 Nvme0n1 : 3.00 13471.00 52.62 0.00 0.00 0.00 0.00 0.00 00:12:11.311 =================================================================================================================== 00:12:11.311 Total : 13471.00 52.62 0.00 0.00 0.00 0.00 0.00 00:12:11.311 00:12:12.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.285 Nvme0n1 : 4.00 13557.25 52.96 0.00 0.00 0.00 0.00 0.00 00:12:12.285 =================================================================================================================== 00:12:12.285 Total : 13557.25 52.96 0.00 0.00 0.00 0.00 0.00 00:12:12.285 00:12:13.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.251 Nvme0n1 : 5.00 13602.60 53.14 0.00 0.00 0.00 0.00 0.00 00:12:13.251 =================================================================================================================== 00:12:13.251 Total : 
13602.60 53.14 0.00 0.00 0.00 0.00 0.00 00:12:13.251 00:12:14.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.632 Nvme0n1 : 6.00 13639.50 53.28 0.00 0.00 0.00 0.00 0.00 00:12:14.632 =================================================================================================================== 00:12:14.632 Total : 13639.50 53.28 0.00 0.00 0.00 0.00 0.00 00:12:14.632 00:12:15.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.570 Nvme0n1 : 7.00 13686.43 53.46 0.00 0.00 0.00 0.00 0.00 00:12:15.570 =================================================================================================================== 00:12:15.570 Total : 13686.43 53.46 0.00 0.00 0.00 0.00 0.00 00:12:15.570 00:12:16.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.510 Nvme0n1 : 8.00 13715.62 53.58 0.00 0.00 0.00 0.00 0.00 00:12:16.510 =================================================================================================================== 00:12:16.510 Total : 13715.62 53.58 0.00 0.00 0.00 0.00 0.00 00:12:16.510 00:12:17.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.448 Nvme0n1 : 9.00 13737.44 53.66 0.00 0.00 0.00 0.00 0.00 00:12:17.448 =================================================================================================================== 00:12:17.448 Total : 13737.44 53.66 0.00 0.00 0.00 0.00 0.00 00:12:17.448 00:12:18.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.386 Nvme0n1 : 10.00 13754.90 53.73 0.00 0.00 0.00 0.00 0.00 00:12:18.386 =================================================================================================================== 00:12:18.386 Total : 13754.90 53.73 0.00 0.00 0.00 0.00 0.00 00:12:18.386 00:12:18.386 00:12:18.386 Latency(us) 00:12:18.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.386 Nvme0n1 : 10.01 13755.19 53.73 0.00 0.00 9297.61 7378.87 17476.27 00:12:18.386 =================================================================================================================== 00:12:18.386 Total : 13755.19 53.73 0.00 0.00 9297.61 7378.87 17476.27 00:12:18.386 0 00:12:18.386 16:08:19 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3368510 00:12:18.386 16:08:19 -- common/autotest_common.sh@936 -- # '[' -z 3368510 ']' 00:12:18.386 16:08:19 -- common/autotest_common.sh@940 -- # kill -0 3368510 00:12:18.386 16:08:19 -- common/autotest_common.sh@941 -- # uname 00:12:18.386 16:08:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:18.386 16:08:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3368510 00:12:18.386 16:08:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:18.386 16:08:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:18.386 16:08:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3368510' 00:12:18.386 killing process with pid 3368510 00:12:18.386 16:08:19 -- common/autotest_common.sh@955 -- # kill 3368510 00:12:18.386 Received shutdown signal, test time was about 10.000000 seconds 00:12:18.386 00:12:18.386 Latency(us) 00:12:18.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.386 =================================================================================================================== 
00:12:18.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:18.386 16:08:19 -- common/autotest_common.sh@960 -- # wait 3368510 00:12:18.644 16:08:19 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:18.902 16:08:20 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:18.902 16:08:20 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:12:19.159 16:08:20 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:12:19.159 16:08:20 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:12:19.159 16:08:20 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:19.418 [2024-04-24 16:08:20.655579] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:19.418 16:08:20 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:19.418 16:08:20 -- common/autotest_common.sh@638 -- # local es=0 00:12:19.419 16:08:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:19.419 16:08:20 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.419 16:08:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.419 16:08:20 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.419 16:08:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.419 16:08:20 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.419 16:08:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.419 16:08:20 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.419 16:08:20 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:19.419 16:08:20 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:19.678 request: 00:12:19.678 { 00:12:19.678 "uuid": "4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334", 00:12:19.678 "method": "bdev_lvol_get_lvstores", 00:12:19.678 "req_id": 1 00:12:19.678 } 00:12:19.678 Got JSON-RPC error response 00:12:19.678 response: 00:12:19.678 { 00:12:19.678 "code": -19, 00:12:19.678 "message": "No such device" 00:12:19.678 } 00:12:19.678 16:08:20 -- common/autotest_common.sh@641 -- # es=1 00:12:19.678 16:08:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:19.678 16:08:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:19.678 16:08:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:19.678 16:08:20 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:19.939 aio_bdev 00:12:19.939 16:08:21 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
4661948e-91b3-4a0b-8df1-4d8e327cb4cc 00:12:19.939 16:08:21 -- common/autotest_common.sh@885 -- # local bdev_name=4661948e-91b3-4a0b-8df1-4d8e327cb4cc 00:12:19.939 16:08:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:19.939 16:08:21 -- common/autotest_common.sh@887 -- # local i 00:12:19.939 16:08:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:19.939 16:08:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:19.939 16:08:21 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:20.200 16:08:21 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4661948e-91b3-4a0b-8df1-4d8e327cb4cc -t 2000 00:12:20.459 [ 00:12:20.459 { 00:12:20.459 "name": "4661948e-91b3-4a0b-8df1-4d8e327cb4cc", 00:12:20.459 "aliases": [ 00:12:20.459 "lvs/lvol" 00:12:20.459 ], 00:12:20.459 "product_name": "Logical Volume", 00:12:20.459 "block_size": 4096, 00:12:20.459 "num_blocks": 38912, 00:12:20.459 "uuid": "4661948e-91b3-4a0b-8df1-4d8e327cb4cc", 00:12:20.459 "assigned_rate_limits": { 00:12:20.459 "rw_ios_per_sec": 0, 00:12:20.459 "rw_mbytes_per_sec": 0, 00:12:20.459 "r_mbytes_per_sec": 0, 00:12:20.459 "w_mbytes_per_sec": 0 00:12:20.459 }, 00:12:20.459 "claimed": false, 00:12:20.459 "zoned": false, 00:12:20.459 "supported_io_types": { 00:12:20.459 "read": true, 00:12:20.459 "write": true, 00:12:20.459 "unmap": true, 00:12:20.459 "write_zeroes": true, 00:12:20.459 "flush": false, 00:12:20.459 "reset": true, 00:12:20.459 "compare": false, 00:12:20.459 "compare_and_write": false, 00:12:20.459 "abort": false, 00:12:20.459 "nvme_admin": false, 00:12:20.459 "nvme_io": false 00:12:20.459 }, 00:12:20.459 "driver_specific": { 00:12:20.459 "lvol": { 00:12:20.459 "lvol_store_uuid": "4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334", 00:12:20.459 "base_bdev": "aio_bdev", 00:12:20.459 "thin_provision": false, 00:12:20.459 "snapshot": false, 00:12:20.459 "clone": false, 00:12:20.459 "esnap_clone": false 00:12:20.459 } 00:12:20.459 } 00:12:20.459 } 00:12:20.459 ] 00:12:20.459 16:08:21 -- common/autotest_common.sh@893 -- # return 0 00:12:20.459 16:08:21 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:20.459 16:08:21 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:20.717 16:08:21 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:20.717 16:08:21 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:20.717 16:08:21 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:20.976 16:08:22 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:12:20.976 16:08:22 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4661948e-91b3-4a0b-8df1-4d8e327cb4cc 00:12:21.236 16:08:22 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bc45f4d-7046-4e2e-9d0b-9d5cb5f91334 00:12:21.496 16:08:22 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:21.755 16:08:22 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
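lvs_grow_clean has just verified the whole grow path: a 200 MiB file exposed as an AIO bdev holds an lvstore of 49 clusters (4 MiB each) plus a 150 MiB lvol; the file is truncated to 400 MiB, bdev_aio_rescan picks up the new block count (51200 -> 102400), bdev_lvol_grow_lvstore doubles the cluster count to 99 under live bdevperf writes, and a hot-remove/re-attach of the AIO bdev (including the expected -19 "No such device" from bdev_lvol_get_lvstores while it is gone) proves the lvol comes back intact with 61 clusters free. The core grow sequence, isolated (the backing-file path is hypothetical; rpc.py again stands in for scripts/rpc.py):

    truncate -s 200M /tmp/aio_file                     # hypothetical path
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore \
            --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'             # 49

    truncate -s 400M /tmp/aio_file                     # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                    # bdev sees 51200 -> 102400 blocks
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"            # lvstore claims the new space
    rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'             # 99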
00:12:21.755 00:12:21.755 real 0m16.801s 00:12:21.755 user 0m16.164s 00:12:21.755 sys 0m1.887s 00:12:21.755 16:08:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:21.755 16:08:22 -- common/autotest_common.sh@10 -- # set +x 00:12:21.755 ************************************ 00:12:21.755 END TEST lvs_grow_clean 00:12:21.755 ************************************ 00:12:21.755 16:08:22 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:21.755 16:08:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:21.755 16:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.755 16:08:22 -- common/autotest_common.sh@10 -- # set +x 00:12:21.755 ************************************ 00:12:21.755 START TEST lvs_grow_dirty 00:12:21.755 ************************************ 00:12:21.755 16:08:23 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:12:21.755 16:08:23 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:21.755 16:08:23 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:21.755 16:08:23 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:21.755 16:08:23 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:21.755 16:08:23 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:21.755 16:08:23 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:21.755 16:08:23 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:21.755 16:08:23 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:21.755 16:08:23 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:22.324 16:08:23 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:22.324 16:08:23 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:22.324 16:08:23 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1e0990f4-78da-480c-8826-e24ae1503182 00:12:22.324 16:08:23 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:22.324 16:08:23 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:22.584 16:08:23 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:22.584 16:08:23 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:22.584 16:08:23 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1e0990f4-78da-480c-8826-e24ae1503182 lvol 150 00:12:22.845 16:08:24 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de 00:12:22.845 16:08:24 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:22.845 16:08:24 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:23.103 [2024-04-24 16:08:24.255860] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:12:23.103 [2024-04-24 16:08:24.255947] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:23.103 true 00:12:23.103 16:08:24 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:23.103 16:08:24 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:23.361 16:08:24 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:23.361 16:08:24 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:23.620 16:08:24 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de 00:12:23.881 16:08:25 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:24.140 16:08:25 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.398 16:08:25 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3370487 00:12:24.398 16:08:25 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:24.398 16:08:25 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:24.398 16:08:25 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3370487 /var/tmp/bdevperf.sock 00:12:24.398 16:08:25 -- common/autotest_common.sh@817 -- # '[' -z 3370487 ']' 00:12:24.398 16:08:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:24.398 16:08:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:24.398 16:08:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:24.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:24.398 16:08:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:24.398 16:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.398 [2024-04-24 16:08:25.531252] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
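The dirty variant repeats the same setup, but the I/O-generator side is worth noting: bdevperf is launched idle (-z means wait for an RPC to start work) with its own RPC socket, the NVMe-oF namespace is attached to it as bdev Nvme0n1 via bdev_nvme_attach_controller, bdev_get_bdevs -t 3000 doubles as a wait-for-ready, and perform_tests finally kicks off the workload. A sketch of that handshake (bdevperf, rpc.py and bdevperf.py stand in for the full build/examples and scripts paths; the target is assumed to be listening at 10.0.0.2:4420):

    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &
    # Configure the idle bdevperf over its private RPC socket:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000  # wait for ns
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                 # start the I/O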
00:12:24.398 [2024-04-24 16:08:25.531330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370487 ] 00:12:24.398 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.398 [2024-04-24 16:08:25.593112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.657 [2024-04-24 16:08:25.707119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.657 16:08:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:24.657 16:08:25 -- common/autotest_common.sh@850 -- # return 0 00:12:24.657 16:08:25 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:24.915 Nvme0n1 00:12:24.915 16:08:26 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:25.174 [ 00:12:25.174 { 00:12:25.174 "name": "Nvme0n1", 00:12:25.174 "aliases": [ 00:12:25.174 "c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de" 00:12:25.174 ], 00:12:25.174 "product_name": "NVMe disk", 00:12:25.174 "block_size": 4096, 00:12:25.174 "num_blocks": 38912, 00:12:25.174 "uuid": "c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de", 00:12:25.174 "assigned_rate_limits": { 00:12:25.174 "rw_ios_per_sec": 0, 00:12:25.174 "rw_mbytes_per_sec": 0, 00:12:25.174 "r_mbytes_per_sec": 0, 00:12:25.174 "w_mbytes_per_sec": 0 00:12:25.174 }, 00:12:25.174 "claimed": false, 00:12:25.174 "zoned": false, 00:12:25.174 "supported_io_types": { 00:12:25.174 "read": true, 00:12:25.174 "write": true, 00:12:25.174 "unmap": true, 00:12:25.174 "write_zeroes": true, 00:12:25.174 "flush": true, 00:12:25.174 "reset": true, 00:12:25.174 "compare": true, 00:12:25.174 "compare_and_write": true, 00:12:25.174 "abort": true, 00:12:25.174 "nvme_admin": true, 00:12:25.174 "nvme_io": true 00:12:25.174 }, 00:12:25.174 "memory_domains": [ 00:12:25.174 { 00:12:25.174 "dma_device_id": "system", 00:12:25.174 "dma_device_type": 1 00:12:25.174 } 00:12:25.174 ], 00:12:25.174 "driver_specific": { 00:12:25.174 "nvme": [ 00:12:25.174 { 00:12:25.174 "trid": { 00:12:25.174 "trtype": "TCP", 00:12:25.174 "adrfam": "IPv4", 00:12:25.174 "traddr": "10.0.0.2", 00:12:25.174 "trsvcid": "4420", 00:12:25.174 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:25.174 }, 00:12:25.174 "ctrlr_data": { 00:12:25.174 "cntlid": 1, 00:12:25.174 "vendor_id": "0x8086", 00:12:25.174 "model_number": "SPDK bdev Controller", 00:12:25.174 "serial_number": "SPDK0", 00:12:25.174 "firmware_revision": "24.05", 00:12:25.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:25.174 "oacs": { 00:12:25.174 "security": 0, 00:12:25.174 "format": 0, 00:12:25.174 "firmware": 0, 00:12:25.174 "ns_manage": 0 00:12:25.174 }, 00:12:25.174 "multi_ctrlr": true, 00:12:25.174 "ana_reporting": false 00:12:25.174 }, 00:12:25.174 "vs": { 00:12:25.174 "nvme_version": "1.3" 00:12:25.174 }, 00:12:25.174 "ns_data": { 00:12:25.174 "id": 1, 00:12:25.174 "can_share": true 00:12:25.174 } 00:12:25.174 } 00:12:25.174 ], 00:12:25.174 "mp_policy": "active_passive" 00:12:25.174 } 00:12:25.174 } 00:12:25.174 ] 00:12:25.174 16:08:26 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3370625 00:12:25.174 16:08:26 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:25.174 16:08:26 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:25.432 Running I/O for 10 seconds... 00:12:26.372 Latency(us) 00:12:26.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.372 Nvme0n1 : 1.00 14335.00 56.00 0.00 0.00 0.00 0.00 0.00 00:12:26.372 =================================================================================================================== 00:12:26.372 Total : 14335.00 56.00 0.00 0.00 0.00 0.00 0.00 00:12:26.372 00:12:27.308 16:08:28 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:27.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:27.308 Nvme0n1 : 2.00 14441.00 56.41 0.00 0.00 0.00 0.00 0.00 00:12:27.308 =================================================================================================================== 00:12:27.308 Total : 14441.00 56.41 0.00 0.00 0.00 0.00 0.00 00:12:27.308 00:12:27.566 true 00:12:27.566 16:08:28 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:27.566 16:08:28 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:27.824 16:08:29 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:27.824 16:08:29 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:27.824 16:08:29 -- target/nvmf_lvs_grow.sh@65 -- # wait 3370625 00:12:28.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.393 Nvme0n1 : 3.00 14623.00 57.12 0.00 0.00 0.00 0.00 0.00 00:12:28.393 =================================================================================================================== 00:12:28.393 Total : 14623.00 57.12 0.00 0.00 0.00 0.00 0.00 00:12:28.393 00:12:29.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.331 Nvme0n1 : 4.00 14631.25 57.15 0.00 0.00 0.00 0.00 0.00 00:12:29.331 =================================================================================================================== 00:12:29.331 Total : 14631.25 57.15 0.00 0.00 0.00 0.00 0.00 00:12:29.331 00:12:30.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.707 Nvme0n1 : 5.00 14662.60 57.28 0.00 0.00 0.00 0.00 0.00 00:12:30.707 =================================================================================================================== 00:12:30.707 Total : 14662.60 57.28 0.00 0.00 0.00 0.00 0.00 00:12:30.707 00:12:31.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.643 Nvme0n1 : 6.00 14767.83 57.69 0.00 0.00 0.00 0.00 0.00 00:12:31.643 =================================================================================================================== 00:12:31.643 Total : 14767.83 57.69 0.00 0.00 0.00 0.00 0.00 00:12:31.643 00:12:32.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.580 Nvme0n1 : 7.00 14838.86 57.96 0.00 0.00 0.00 0.00 0.00 00:12:32.580 =================================================================================================================== 00:12:32.580 Total : 14838.86 57.96 0.00 0.00 0.00 0.00 0.00 00:12:32.580 00:12:33.520 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:12:33.520 Nvme0n1 : 8.00 14850.00 58.01 0.00 0.00 0.00 0.00 0.00 00:12:33.520 =================================================================================================================== 00:12:33.520 Total : 14850.00 58.01 0.00 0.00 0.00 0.00 0.00 00:12:33.520 00:12:34.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.458 Nvme0n1 : 9.00 14893.11 58.18 0.00 0.00 0.00 0.00 0.00 00:12:34.458 =================================================================================================================== 00:12:34.458 Total : 14893.11 58.18 0.00 0.00 0.00 0.00 0.00 00:12:34.458 00:12:35.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.398 Nvme0n1 : 10.00 14946.90 58.39 0.00 0.00 0.00 0.00 0.00 00:12:35.398 =================================================================================================================== 00:12:35.398 Total : 14946.90 58.39 0.00 0.00 0.00 0.00 0.00 00:12:35.398 00:12:35.398 00:12:35.398 Latency(us) 00:12:35.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.398 Nvme0n1 : 10.01 14952.08 58.41 0.00 0.00 8556.01 4830.25 17087.91 00:12:35.398 =================================================================================================================== 00:12:35.398 Total : 14952.08 58.41 0.00 0.00 8556.01 4830.25 17087.91 00:12:35.398 0 00:12:35.398 16:08:36 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3370487 00:12:35.398 16:08:36 -- common/autotest_common.sh@936 -- # '[' -z 3370487 ']' 00:12:35.398 16:08:36 -- common/autotest_common.sh@940 -- # kill -0 3370487 00:12:35.398 16:08:36 -- common/autotest_common.sh@941 -- # uname 00:12:35.398 16:08:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:35.398 16:08:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3370487 00:12:35.398 16:08:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:35.398 16:08:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:35.398 16:08:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3370487' 00:12:35.398 killing process with pid 3370487 00:12:35.398 16:08:36 -- common/autotest_common.sh@955 -- # kill 3370487 00:12:35.398 Received shutdown signal, test time was about 10.000000 seconds 00:12:35.398 00:12:35.398 Latency(us) 00:12:35.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.398 =================================================================================================================== 00:12:35.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:35.398 16:08:36 -- common/autotest_common.sh@960 -- # wait 3370487 00:12:35.656 16:08:36 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:36.224 16:08:37 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:36.224 16:08:37 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:12:36.224 16:08:37 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:12:36.224 16:08:37 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:12:36.224 16:08:37 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3367976 00:12:36.224 
16:08:37 -- target/nvmf_lvs_grow.sh@74 -- # wait 3367976 00:12:36.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3367976 Killed "${NVMF_APP[@]}" "$@" 00:12:36.224 16:08:37 -- target/nvmf_lvs_grow.sh@74 -- # true 00:12:36.224 16:08:37 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:12:36.224 16:08:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:36.224 16:08:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:36.224 16:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.482 16:08:37 -- nvmf/common.sh@470 -- # nvmfpid=3371946 00:12:36.482 16:08:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:36.482 16:08:37 -- nvmf/common.sh@471 -- # waitforlisten 3371946 00:12:36.482 16:08:37 -- common/autotest_common.sh@817 -- # '[' -z 3371946 ']' 00:12:36.483 16:08:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.483 16:08:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:36.483 16:08:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.483 16:08:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:36.483 16:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.483 [2024-04-24 16:08:37.554093] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:12:36.483 [2024-04-24 16:08:37.554164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.483 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.483 [2024-04-24 16:08:37.618245] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.483 [2024-04-24 16:08:37.719557] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.483 [2024-04-24 16:08:37.719610] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.483 [2024-04-24 16:08:37.719623] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.483 [2024-04-24 16:08:37.719635] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.483 [2024-04-24 16:08:37.719646] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
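(Note: this is the step that makes the "dirty" variant dirty. The nvmf target that owned the lvstore was just killed with SIGKILL, so the lvstore metadata on the AIO file was never cleanly unloaded; re-creating the AIO bdev on the freshly started target below forces a blobstore load that must replay that metadata, which is why "Performing recovery on blobstore" notices appear shortly after. A minimal by-hand sketch of the same sequence, assuming a restarted nvmf_tgt and the rpc.py/aio_bdev paths used throughout this log; RPC and AIO are illustrative shorthand, not variables from the test script:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  $RPC bdev_aio_create "$AIO" aio_bdev 4096   # reload triggers blobstore recovery
  $RPC bdev_lvol_get_lvstores                 # cluster counts should survive the crash
)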
00:12:36.483 [2024-04-24 16:08:37.719690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.741 16:08:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:36.741 16:08:37 -- common/autotest_common.sh@850 -- # return 0 00:12:36.741 16:08:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:36.741 16:08:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:36.741 16:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.741 16:08:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.741 16:08:37 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:37.001 [2024-04-24 16:08:38.126528] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:37.001 [2024-04-24 16:08:38.126662] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:37.001 [2024-04-24 16:08:38.126717] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:37.001 16:08:38 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:12:37.001 16:08:38 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de 00:12:37.001 16:08:38 -- common/autotest_common.sh@885 -- # local bdev_name=c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de 00:12:37.001 16:08:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:37.001 16:08:38 -- common/autotest_common.sh@887 -- # local i 00:12:37.001 16:08:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:37.001 16:08:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:37.001 16:08:38 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:37.261 16:08:38 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de -t 2000 00:12:37.521 [ 00:12:37.521 { 00:12:37.521 "name": "c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de", 00:12:37.521 "aliases": [ 00:12:37.521 "lvs/lvol" 00:12:37.521 ], 00:12:37.521 "product_name": "Logical Volume", 00:12:37.521 "block_size": 4096, 00:12:37.521 "num_blocks": 38912, 00:12:37.521 "uuid": "c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de", 00:12:37.521 "assigned_rate_limits": { 00:12:37.521 "rw_ios_per_sec": 0, 00:12:37.521 "rw_mbytes_per_sec": 0, 00:12:37.521 "r_mbytes_per_sec": 0, 00:12:37.521 "w_mbytes_per_sec": 0 00:12:37.521 }, 00:12:37.521 "claimed": false, 00:12:37.521 "zoned": false, 00:12:37.521 "supported_io_types": { 00:12:37.521 "read": true, 00:12:37.521 "write": true, 00:12:37.521 "unmap": true, 00:12:37.521 "write_zeroes": true, 00:12:37.521 "flush": false, 00:12:37.521 "reset": true, 00:12:37.521 "compare": false, 00:12:37.521 "compare_and_write": false, 00:12:37.521 "abort": false, 00:12:37.521 "nvme_admin": false, 00:12:37.521 "nvme_io": false 00:12:37.521 }, 00:12:37.521 "driver_specific": { 00:12:37.521 "lvol": { 00:12:37.521 "lvol_store_uuid": "1e0990f4-78da-480c-8826-e24ae1503182", 00:12:37.521 "base_bdev": "aio_bdev", 00:12:37.521 "thin_provision": false, 00:12:37.521 "snapshot": false, 00:12:37.521 "clone": false, 00:12:37.521 "esnap_clone": false 00:12:37.521 } 00:12:37.521 } 00:12:37.521 } 00:12:37.521 ] 00:12:37.521 16:08:38 -- common/autotest_common.sh@893 -- # return 0 00:12:37.521 16:08:38 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:37.521 16:08:38 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:12:37.779 16:08:38 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:12:37.779 16:08:38 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:37.779 16:08:38 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:12:38.039 16:08:39 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:12:38.039 16:08:39 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:38.323 [2024-04-24 16:08:39.347311] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:38.323 16:08:39 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:38.323 16:08:39 -- common/autotest_common.sh@638 -- # local es=0 00:12:38.323 16:08:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:38.323 16:08:39 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.323 16:08:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:38.323 16:08:39 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.323 16:08:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:38.323 16:08:39 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.323 16:08:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:38.323 16:08:39 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.323 16:08:39 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:38.323 16:08:39 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:38.605 request: 00:12:38.605 { 00:12:38.605 "uuid": "1e0990f4-78da-480c-8826-e24ae1503182", 00:12:38.605 "method": "bdev_lvol_get_lvstores", 00:12:38.605 "req_id": 1 00:12:38.605 } 00:12:38.605 Got JSON-RPC error response 00:12:38.605 response: 00:12:38.605 { 00:12:38.605 "code": -19, 00:12:38.605 "message": "No such device" 00:12:38.605 } 00:12:38.605 16:08:39 -- common/autotest_common.sh@641 -- # es=1 00:12:38.605 16:08:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:38.605 16:08:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:38.605 16:08:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:38.605 16:08:39 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:38.605 aio_bdev 00:12:38.605 16:08:39 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de 00:12:38.605 16:08:39 -- 
common/autotest_common.sh@885 -- # local bdev_name=c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de 00:12:38.605 16:08:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:38.605 16:08:39 -- common/autotest_common.sh@887 -- # local i 00:12:38.605 16:08:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:38.605 16:08:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:38.605 16:08:39 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:38.863 16:08:40 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de -t 2000 00:12:39.122 [ 00:12:39.122 { 00:12:39.122 "name": "c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de", 00:12:39.122 "aliases": [ 00:12:39.122 "lvs/lvol" 00:12:39.122 ], 00:12:39.122 "product_name": "Logical Volume", 00:12:39.122 "block_size": 4096, 00:12:39.122 "num_blocks": 38912, 00:12:39.122 "uuid": "c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de", 00:12:39.122 "assigned_rate_limits": { 00:12:39.122 "rw_ios_per_sec": 0, 00:12:39.122 "rw_mbytes_per_sec": 0, 00:12:39.122 "r_mbytes_per_sec": 0, 00:12:39.122 "w_mbytes_per_sec": 0 00:12:39.122 }, 00:12:39.122 "claimed": false, 00:12:39.122 "zoned": false, 00:12:39.122 "supported_io_types": { 00:12:39.122 "read": true, 00:12:39.122 "write": true, 00:12:39.122 "unmap": true, 00:12:39.122 "write_zeroes": true, 00:12:39.122 "flush": false, 00:12:39.122 "reset": true, 00:12:39.122 "compare": false, 00:12:39.122 "compare_and_write": false, 00:12:39.122 "abort": false, 00:12:39.122 "nvme_admin": false, 00:12:39.122 "nvme_io": false 00:12:39.122 }, 00:12:39.122 "driver_specific": { 00:12:39.122 "lvol": { 00:12:39.122 "lvol_store_uuid": "1e0990f4-78da-480c-8826-e24ae1503182", 00:12:39.122 "base_bdev": "aio_bdev", 00:12:39.122 "thin_provision": false, 00:12:39.122 "snapshot": false, 00:12:39.122 "clone": false, 00:12:39.122 "esnap_clone": false 00:12:39.122 } 00:12:39.122 } 00:12:39.122 } 00:12:39.122 ] 00:12:39.379 16:08:40 -- common/autotest_common.sh@893 -- # return 0 00:12:39.379 16:08:40 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:39.379 16:08:40 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:39.379 16:08:40 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:39.380 16:08:40 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:39.380 16:08:40 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:39.638 16:08:40 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:12:39.638 16:08:40 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c4d9ca20-f3fc-4bd6-b2f3-46e74f42a3de 00:12:39.898 16:08:41 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e0990f4-78da-480c-8826-e24ae1503182 00:12:40.156 16:08:41 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:40.414 16:08:41 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:40.414 00:12:40.414 real 0m18.618s 00:12:40.414 user 
0m47.454s 00:12:40.414 sys 0m4.873s 00:12:40.414 16:08:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:40.414 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 ************************************ 00:12:40.414 END TEST lvs_grow_dirty 00:12:40.414 ************************************ 00:12:40.414 16:08:41 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:40.414 16:08:41 -- common/autotest_common.sh@794 -- # type=--id 00:12:40.414 16:08:41 -- common/autotest_common.sh@795 -- # id=0 00:12:40.414 16:08:41 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:40.414 16:08:41 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:40.414 16:08:41 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:40.414 16:08:41 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:40.414 16:08:41 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:40.414 16:08:41 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:40.414 nvmf_trace.0 00:12:40.414 16:08:41 -- common/autotest_common.sh@809 -- # return 0 00:12:40.414 16:08:41 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:40.414 16:08:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:40.414 16:08:41 -- nvmf/common.sh@117 -- # sync 00:12:40.414 16:08:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:40.414 16:08:41 -- nvmf/common.sh@120 -- # set +e 00:12:40.414 16:08:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:40.414 16:08:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:40.414 rmmod nvme_tcp 00:12:40.675 rmmod nvme_fabrics 00:12:40.675 rmmod nvme_keyring 00:12:40.675 16:08:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:40.675 16:08:41 -- nvmf/common.sh@124 -- # set -e 00:12:40.675 16:08:41 -- nvmf/common.sh@125 -- # return 0 00:12:40.675 16:08:41 -- nvmf/common.sh@478 -- # '[' -n 3371946 ']' 00:12:40.675 16:08:41 -- nvmf/common.sh@479 -- # killprocess 3371946 00:12:40.675 16:08:41 -- common/autotest_common.sh@936 -- # '[' -z 3371946 ']' 00:12:40.675 16:08:41 -- common/autotest_common.sh@940 -- # kill -0 3371946 00:12:40.675 16:08:41 -- common/autotest_common.sh@941 -- # uname 00:12:40.675 16:08:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:40.675 16:08:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3371946 00:12:40.675 16:08:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:40.675 16:08:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:40.675 16:08:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3371946' 00:12:40.675 killing process with pid 3371946 00:12:40.675 16:08:41 -- common/autotest_common.sh@955 -- # kill 3371946 00:12:40.675 16:08:41 -- common/autotest_common.sh@960 -- # wait 3371946 00:12:40.936 16:08:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:40.936 16:08:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:40.936 16:08:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:40.936 16:08:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:40.936 16:08:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:40.936 16:08:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.936 16:08:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.936 16:08:42 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:42.844 16:08:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:42.844 00:12:42.844 real 0m41.574s 00:12:42.844 user 1m9.521s 00:12:42.844 sys 0m8.665s 00:12:42.844 16:08:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:42.844 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:42.844 ************************************ 00:12:42.844 END TEST nvmf_lvs_grow 00:12:42.844 ************************************ 00:12:42.844 16:08:44 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:42.844 16:08:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:42.844 16:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.844 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:43.103 ************************************ 00:12:43.103 START TEST nvmf_bdev_io_wait 00:12:43.103 ************************************ 00:12:43.103 16:08:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:43.103 * Looking for test storage... 00:12:43.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.103 16:08:44 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.103 16:08:44 -- nvmf/common.sh@7 -- # uname -s 00:12:43.103 16:08:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.103 16:08:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.103 16:08:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.103 16:08:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.103 16:08:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.103 16:08:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.103 16:08:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.103 16:08:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.103 16:08:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.103 16:08:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.103 16:08:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:43.103 16:08:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:43.103 16:08:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.103 16:08:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.103 16:08:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.103 16:08:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.103 16:08:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.103 16:08:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.103 16:08:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.103 16:08:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.103 16:08:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.103 16:08:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.104 16:08:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.104 16:08:44 -- paths/export.sh@5 -- # export PATH 00:12:43.104 16:08:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.104 16:08:44 -- nvmf/common.sh@47 -- # : 0 00:12:43.104 16:08:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.104 16:08:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.104 16:08:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.104 16:08:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.104 16:08:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.104 16:08:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.104 16:08:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.104 16:08:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.104 16:08:44 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:43.104 16:08:44 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:43.104 16:08:44 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:43.104 16:08:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:43.104 16:08:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.104 16:08:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:43.104 16:08:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:43.104 16:08:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:43.104 16:08:44 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.104 16:08:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.104 16:08:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.104 16:08:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:43.104 16:08:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:43.104 16:08:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:43.104 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.007 16:08:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:45.007 16:08:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:45.007 16:08:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:45.007 16:08:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:45.007 16:08:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:45.007 16:08:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:45.007 16:08:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:45.007 16:08:46 -- nvmf/common.sh@295 -- # net_devs=() 00:12:45.007 16:08:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:45.007 16:08:46 -- nvmf/common.sh@296 -- # e810=() 00:12:45.007 16:08:46 -- nvmf/common.sh@296 -- # local -ga e810 00:12:45.007 16:08:46 -- nvmf/common.sh@297 -- # x722=() 00:12:45.007 16:08:46 -- nvmf/common.sh@297 -- # local -ga x722 00:12:45.007 16:08:46 -- nvmf/common.sh@298 -- # mlx=() 00:12:45.007 16:08:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:45.007 16:08:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.008 16:08:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:45.008 16:08:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:45.008 16:08:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:45.008 16:08:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.008 16:08:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:45.008 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:45.008 16:08:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:12:45.008 16:08:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:45.008 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:45.008 16:08:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:45.008 16:08:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.008 16:08:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.008 16:08:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:45.008 16:08:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.008 16:08:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:45.008 Found net devices under 0000:09:00.0: cvl_0_0 00:12:45.008 16:08:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.008 16:08:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.008 16:08:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.008 16:08:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:45.008 16:08:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.008 16:08:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:45.008 Found net devices under 0000:09:00.1: cvl_0_1 00:12:45.008 16:08:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.008 16:08:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:45.008 16:08:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:45.008 16:08:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:45.008 16:08:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:45.008 16:08:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.008 16:08:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.008 16:08:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.008 16:08:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:45.008 16:08:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.008 16:08:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.008 16:08:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:45.008 16:08:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.008 16:08:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.008 16:08:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:45.008 16:08:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:45.008 16:08:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.266 16:08:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.266 16:08:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.266 16:08:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.266 16:08:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:45.266 16:08:46 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.266 16:08:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.266 16:08:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.266 16:08:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:45.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:12:45.266 00:12:45.266 --- 10.0.0.2 ping statistics --- 00:12:45.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.266 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:12:45.266 16:08:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:12:45.266 00:12:45.266 --- 10.0.0.1 ping statistics --- 00:12:45.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.266 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:12:45.266 16:08:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.266 16:08:46 -- nvmf/common.sh@411 -- # return 0 00:12:45.266 16:08:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:45.266 16:08:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.266 16:08:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:45.266 16:08:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:45.266 16:08:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.266 16:08:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:45.266 16:08:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:45.266 16:08:46 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:45.266 16:08:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:45.266 16:08:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:45.266 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.266 16:08:46 -- nvmf/common.sh@470 -- # nvmfpid=3374473 00:12:45.266 16:08:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:45.266 16:08:46 -- nvmf/common.sh@471 -- # waitforlisten 3374473 00:12:45.266 16:08:46 -- common/autotest_common.sh@817 -- # '[' -z 3374473 ']' 00:12:45.266 16:08:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.266 16:08:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:45.266 16:08:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.266 16:08:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:45.266 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.266 [2024-04-24 16:08:46.466417] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:12:45.266 [2024-04-24 16:08:46.466485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.266 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.266 [2024-04-24 16:08:46.531052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.527 [2024-04-24 16:08:46.638592] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.527 [2024-04-24 16:08:46.638654] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.527 [2024-04-24 16:08:46.638668] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.527 [2024-04-24 16:08:46.638680] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.527 [2024-04-24 16:08:46.638690] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.527 [2024-04-24 16:08:46.638752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.527 [2024-04-24 16:08:46.638780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.527 [2024-04-24 16:08:46.638839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.527 [2024-04-24 16:08:46.638842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.527 16:08:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:45.527 16:08:46 -- common/autotest_common.sh@850 -- # return 0 00:12:45.527 16:08:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:45.527 16:08:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:45.527 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 16:08:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.527 16:08:46 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:45.527 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.527 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.527 16:08:46 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:45.527 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.527 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.527 16:08:46 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.527 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.527 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 [2024-04-24 16:08:46.785614] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.527 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.527 16:08:46 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:45.527 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.527 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.786 Malloc0 00:12:45.786 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.786 16:08:46 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.786 16:08:46 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.786 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.786 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.786 16:08:46 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.786 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.786 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.786 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.786 16:08:46 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.786 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.787 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:45.787 [2024-04-24 16:08:46.852440] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.787 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3374505 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@30 -- # READ_PID=3374507 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:45.787 16:08:46 -- nvmf/common.sh@521 -- # config=() 00:12:45.787 16:08:46 -- nvmf/common.sh@521 -- # local subsystem config 00:12:45.787 16:08:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3374509 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:45.787 16:08:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:45.787 { 00:12:45.787 "params": { 00:12:45.787 "name": "Nvme$subsystem", 00:12:45.787 "trtype": "$TEST_TRANSPORT", 00:12:45.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:45.787 "adrfam": "ipv4", 00:12:45.787 "trsvcid": "$NVMF_PORT", 00:12:45.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:45.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:45.787 "hdgst": ${hdgst:-false}, 00:12:45.787 "ddgst": ${ddgst:-false} 00:12:45.787 }, 00:12:45.787 "method": "bdev_nvme_attach_controller" 00:12:45.787 } 00:12:45.787 EOF 00:12:45.787 )") 00:12:45.787 16:08:46 -- nvmf/common.sh@521 -- # config=() 00:12:45.787 16:08:46 -- nvmf/common.sh@521 -- # local subsystem config 00:12:45.787 16:08:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:45.787 16:08:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:45.787 { 00:12:45.787 "params": { 00:12:45.787 "name": "Nvme$subsystem", 00:12:45.787 "trtype": "$TEST_TRANSPORT", 00:12:45.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:45.787 "adrfam": "ipv4", 00:12:45.787 "trsvcid": "$NVMF_PORT", 00:12:45.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:45.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:45.787 "hdgst": ${hdgst:-false}, 00:12:45.787 "ddgst": ${ddgst:-false} 00:12:45.787 }, 00:12:45.787 "method": "bdev_nvme_attach_controller" 00:12:45.787 } 00:12:45.787 EOF 00:12:45.787 )") 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3374511 00:12:45.787 
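(Note: four bdevperf instances are being launched in parallel here, one per I/O type: write on -m 0x10/-i 1, read on -m 0x20/-i 2, flush on -m 0x40/-i 3 and unmap on -m 0x80/-i 4, all attached to the same nqn.2016-06.io.spdk:cnode1 subsystem. Each reads its bdev configuration as JSON from /dev/fd/63, which is what a bash process substitution looks like to the child process; the gen_nvmf_target_json output printed just below shows the JSON each instance receives. A sketch of a single such launch under those assumptions, with BPERF as illustrative shorthand:

  BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  $BPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256
  # <(...) expands to /dev/fd/63 inside the child, matching the command lines above
)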
16:08:46 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@35 -- # sync 00:12:45.787 16:08:46 -- nvmf/common.sh@521 -- # config=() 00:12:45.787 16:08:46 -- nvmf/common.sh@521 -- # local subsystem config 00:12:45.787 16:08:46 -- nvmf/common.sh@543 -- # cat 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:45.787 16:08:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:45.787 16:08:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:45.787 { 00:12:45.787 "params": { 00:12:45.787 "name": "Nvme$subsystem", 00:12:45.787 "trtype": "$TEST_TRANSPORT", 00:12:45.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:45.787 "adrfam": "ipv4", 00:12:45.787 "trsvcid": "$NVMF_PORT", 00:12:45.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:45.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:45.787 "hdgst": ${hdgst:-false}, 00:12:45.787 "ddgst": ${ddgst:-false} 00:12:45.787 }, 00:12:45.787 "method": "bdev_nvme_attach_controller" 00:12:45.787 } 00:12:45.787 EOF 00:12:45.787 )") 00:12:45.787 16:08:46 -- nvmf/common.sh@521 -- # config=() 00:12:45.787 16:08:46 -- nvmf/common.sh@521 -- # local subsystem config 00:12:45.787 16:08:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:45.787 16:08:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:45.787 { 00:12:45.787 "params": { 00:12:45.787 "name": "Nvme$subsystem", 00:12:45.787 "trtype": "$TEST_TRANSPORT", 00:12:45.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:45.787 "adrfam": "ipv4", 00:12:45.787 "trsvcid": "$NVMF_PORT", 00:12:45.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:45.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:45.787 "hdgst": ${hdgst:-false}, 00:12:45.787 "ddgst": ${ddgst:-false} 00:12:45.787 }, 00:12:45.787 "method": "bdev_nvme_attach_controller" 00:12:45.787 } 00:12:45.787 EOF 00:12:45.787 )") 00:12:45.787 16:08:46 -- nvmf/common.sh@543 -- # cat 00:12:45.787 16:08:46 -- nvmf/common.sh@543 -- # cat 00:12:45.787 16:08:46 -- target/bdev_io_wait.sh@37 -- # wait 3374505 00:12:45.787 16:08:46 -- nvmf/common.sh@543 -- # cat 00:12:45.787 16:08:46 -- nvmf/common.sh@545 -- # jq . 00:12:45.787 16:08:46 -- nvmf/common.sh@545 -- # jq . 00:12:45.787 16:08:46 -- nvmf/common.sh@545 -- # jq . 00:12:45.787 16:08:46 -- nvmf/common.sh@546 -- # IFS=, 00:12:45.787 16:08:46 -- nvmf/common.sh@545 -- # jq . 
00:12:45.787 16:08:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:45.787 "params": { 00:12:45.787 "name": "Nvme1", 00:12:45.787 "trtype": "tcp", 00:12:45.787 "traddr": "10.0.0.2", 00:12:45.787 "adrfam": "ipv4", 00:12:45.787 "trsvcid": "4420", 00:12:45.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:45.787 "hdgst": false, 00:12:45.787 "ddgst": false 00:12:45.787 }, 00:12:45.787 "method": "bdev_nvme_attach_controller" 00:12:45.787 }' 00:12:45.787 16:08:46 -- nvmf/common.sh@546 -- # IFS=, 00:12:45.787 16:08:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:45.787 "params": { 00:12:45.787 "name": "Nvme1", 00:12:45.787 "trtype": "tcp", 00:12:45.787 "traddr": "10.0.0.2", 00:12:45.787 "adrfam": "ipv4", 00:12:45.787 "trsvcid": "4420", 00:12:45.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:45.787 "hdgst": false, 00:12:45.787 "ddgst": false 00:12:45.787 }, 00:12:45.787 "method": "bdev_nvme_attach_controller" 00:12:45.787 }' 00:12:45.787 16:08:46 -- nvmf/common.sh@546 -- # IFS=, 00:12:45.787 16:08:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:45.787 "params": { 00:12:45.787 "name": "Nvme1", 00:12:45.787 "trtype": "tcp", 00:12:45.787 "traddr": "10.0.0.2", 00:12:45.787 "adrfam": "ipv4", 00:12:45.787 "trsvcid": "4420", 00:12:45.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:45.787 "hdgst": false, 00:12:45.787 "ddgst": false 00:12:45.787 }, 00:12:45.787 "method": "bdev_nvme_attach_controller" 00:12:45.787 }' 00:12:45.788 16:08:46 -- nvmf/common.sh@546 -- # IFS=, 00:12:45.788 16:08:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:45.788 "params": { 00:12:45.788 "name": "Nvme1", 00:12:45.788 "trtype": "tcp", 00:12:45.788 "traddr": "10.0.0.2", 00:12:45.788 "adrfam": "ipv4", 00:12:45.788 "trsvcid": "4420", 00:12:45.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:45.788 "hdgst": false, 00:12:45.788 "ddgst": false 00:12:45.788 }, 00:12:45.788 "method": "bdev_nvme_attach_controller" 00:12:45.788 }' 00:12:45.788 [2024-04-24 16:08:46.899955] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:12:45.788 [2024-04-24 16:08:46.899955] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:12:45.788 [2024-04-24 16:08:46.899955] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:12:45.788 [2024-04-24 16:08:46.899963] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:12:45.788 [2024-04-24 16:08:46.900050] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:12:45.788 [2024-04-24 16:08:46.900051] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:12:45.788 [2024-04-24 16:08:46.900051] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:12:45.788 [2024-04-24 16:08:46.900051] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:12:45.788 EAL: No free 2048 kB hugepages reported on node 1 
00:12:45.788 EAL: No free 2048 kB hugepages reported on node 1 
00:12:46.047 [2024-04-24 16:08:47.073101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 
00:12:46.047 EAL: No free 2048 kB hugepages reported on node 1 
00:12:46.047 [2024-04-24 16:08:47.172567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 
00:12:46.047 [2024-04-24 16:08:47.180912] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 
00:12:46.047 EAL: No free 2048 kB hugepages reported on node 1 
00:12:46.047 [2024-04-24 16:08:47.277964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 
00:12:46.047 [2024-04-24 16:08:47.278940] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 
00:12:46.305 [2024-04-24 16:08:47.352906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 
00:12:46.305 [2024-04-24 16:08:47.379885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 
00:12:46.305 [2024-04-24 16:08:47.444868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 
00:12:46.305 Running I/O for 1 seconds... 
00:12:46.305 Running I/O for 1 seconds... 
00:12:46.305 Running I/O for 1 seconds... 
00:12:46.305 Running I/O for 1 seconds...
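[editor's note] The four EAL lines above were interleaved in the raw capture (the four processes share one stdout) and have been de-interleaved here; head/tail pairing follows SPDK's -i shm-id to --file-prefix=spdkN mapping, so it is an inference, not verbatim log order. Four bdevperf instances run in parallel, one workload apiece: -m pins the reactor core and -i gives each process its own shared-memory id, hence the distinct hugepage file-prefixes. Only the flush (-m 0x40 -i 3) and unmap (-m 0x80 -i 4) invocations are traced earlier; the write/read pairings are presumed from the same pattern and from the result tables that follow:

#   core mask  shm id  workload   reactor core
#   -m 0x10    -i 1    -w write   4   (presumed -i 1)
#   -m 0x20    -i 2    -w read    5   (presumed -i 2)
#   -m 0x40    -i 3    -w flush   6   (traced above)
#   -m 0x80    -i 4    -w unmap   7   (traced above)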
00:12:47.283 00:12:47.283 Latency(us) 00:12:47.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.283 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:47.283 Nvme1n1 : 1.00 202377.87 790.54 0.00 0.00 629.91 251.83 873.81 00:12:47.283 =================================================================================================================== 00:12:47.283 Total : 202377.87 790.54 0.00 0.00 629.91 251.83 873.81 00:12:47.542 00:12:47.542 Latency(us) 00:12:47.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.542 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:47.542 Nvme1n1 : 1.02 5859.80 22.89 0.00 0.00 21576.78 8786.68 31457.28 00:12:47.542 =================================================================================================================== 00:12:47.542 Total : 5859.80 22.89 0.00 0.00 21576.78 8786.68 31457.28 00:12:47.542 00:12:47.542 Latency(us) 00:12:47.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.542 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:47.542 Nvme1n1 : 1.01 9842.77 38.45 0.00 0.00 12942.16 8932.31 23981.32 00:12:47.542 =================================================================================================================== 00:12:47.542 Total : 9842.77 38.45 0.00 0.00 12942.16 8932.31 23981.32 00:12:47.542 00:12:47.542 Latency(us) 00:12:47.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.542 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:47.542 Nvme1n1 : 1.01 5804.34 22.67 0.00 0.00 21978.70 6116.69 45438.29 00:12:47.542 =================================================================================================================== 00:12:47.542 Total : 5804.34 22.67 0.00 0.00 21978.70 6116.69 45438.29 00:12:47.800 16:08:48 -- target/bdev_io_wait.sh@38 -- # wait 3374507 00:12:47.800 16:08:48 -- target/bdev_io_wait.sh@39 -- # wait 3374509 00:12:47.800 16:08:48 -- target/bdev_io_wait.sh@40 -- # wait 3374511 00:12:47.800 16:08:48 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.800 16:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.800 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:12:47.800 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.800 16:08:49 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:47.800 16:08:49 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:47.800 16:08:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:47.800 16:08:49 -- nvmf/common.sh@117 -- # sync 00:12:47.800 16:08:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.800 16:08:49 -- nvmf/common.sh@120 -- # set +e 00:12:47.800 16:08:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.800 16:08:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.800 rmmod nvme_tcp 00:12:47.800 rmmod nvme_fabrics 00:12:47.800 rmmod nvme_keyring 00:12:47.800 16:08:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.800 16:08:49 -- nvmf/common.sh@124 -- # set -e 00:12:47.800 16:08:49 -- nvmf/common.sh@125 -- # return 0 00:12:47.800 16:08:49 -- nvmf/common.sh@478 -- # '[' -n 3374473 ']' 00:12:47.800 16:08:49 -- nvmf/common.sh@479 -- # killprocess 3374473 00:12:47.800 16:08:49 -- common/autotest_common.sh@936 -- # '[' -z 3374473 ']' 00:12:47.800 16:08:49 -- 
common/autotest_common.sh@940 -- # kill -0 3374473 00:12:47.800 16:08:49 -- common/autotest_common.sh@941 -- # uname 00:12:47.800 16:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:47.800 16:08:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3374473 00:12:47.800 16:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:47.800 16:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:47.800 16:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3374473' 00:12:47.800 killing process with pid 3374473 00:12:47.800 16:08:49 -- common/autotest_common.sh@955 -- # kill 3374473 00:12:47.800 16:08:49 -- common/autotest_common.sh@960 -- # wait 3374473 00:12:48.058 16:08:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:48.058 16:08:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:48.058 16:08:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:48.058 16:08:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.058 16:08:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.058 16:08:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.058 16:08:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.058 16:08:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.599 16:08:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:50.599 00:12:50.599 real 0m7.165s 00:12:50.599 user 0m16.449s 00:12:50.599 sys 0m3.332s 00:12:50.599 16:08:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:50.599 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:12:50.599 ************************************ 00:12:50.599 END TEST nvmf_bdev_io_wait 00:12:50.599 ************************************ 00:12:50.599 16:08:51 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:50.599 16:08:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:50.599 16:08:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.599 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:12:50.599 ************************************ 00:12:50.599 START TEST nvmf_queue_depth 00:12:50.599 ************************************ 00:12:50.599 16:08:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:50.599 * Looking for test storage... 
00:12:50.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.599 16:08:51 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.599 16:08:51 -- nvmf/common.sh@7 -- # uname -s 00:12:50.599 16:08:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.599 16:08:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.599 16:08:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.599 16:08:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.599 16:08:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.599 16:08:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.599 16:08:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.599 16:08:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.599 16:08:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.599 16:08:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.599 16:08:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:50.599 16:08:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:50.599 16:08:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.599 16:08:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.599 16:08:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.599 16:08:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.599 16:08:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.599 16:08:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.599 16:08:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.600 16:08:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.600 16:08:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.600 16:08:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.600 16:08:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.600 16:08:51 -- paths/export.sh@5 -- # export PATH 00:12:50.600 16:08:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.600 16:08:51 -- nvmf/common.sh@47 -- # : 0 00:12:50.600 16:08:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.600 16:08:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.600 16:08:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.600 16:08:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.600 16:08:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.600 16:08:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.600 16:08:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.600 16:08:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.600 16:08:51 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:50.600 16:08:51 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:50.600 16:08:51 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:50.600 16:08:51 -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:50.600 16:08:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:50.600 16:08:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.600 16:08:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:50.600 16:08:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:50.600 16:08:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:50.600 16:08:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.600 16:08:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.600 16:08:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.600 16:08:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:50.600 16:08:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:50.600 16:08:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:50.600 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:12:52.502 16:08:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:52.502 16:08:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:52.502 16:08:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:52.502 16:08:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:52.502 16:08:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:52.502 16:08:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:52.502 16:08:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:52.502 16:08:53 -- nvmf/common.sh@295 -- # net_devs=() 
00:12:52.502 16:08:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:52.502 16:08:53 -- nvmf/common.sh@296 -- # e810=() 00:12:52.502 16:08:53 -- nvmf/common.sh@296 -- # local -ga e810 00:12:52.502 16:08:53 -- nvmf/common.sh@297 -- # x722=() 00:12:52.502 16:08:53 -- nvmf/common.sh@297 -- # local -ga x722 00:12:52.502 16:08:53 -- nvmf/common.sh@298 -- # mlx=() 00:12:52.502 16:08:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:52.502 16:08:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.502 16:08:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:52.502 16:08:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:52.502 16:08:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:52.502 16:08:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.502 16:08:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:52.502 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:52.502 16:08:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.502 16:08:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:52.502 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:52.502 16:08:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:52.502 16:08:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.502 16:08:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.502 16:08:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:52.502 16:08:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:12:52.502 16:08:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:52.502 Found net devices under 0000:09:00.0: cvl_0_0 00:12:52.502 16:08:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.502 16:08:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.502 16:08:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.502 16:08:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:52.502 16:08:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.502 16:08:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:52.502 Found net devices under 0000:09:00.1: cvl_0_1 00:12:52.502 16:08:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.502 16:08:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:52.502 16:08:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:52.502 16:08:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:52.502 16:08:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:52.502 16:08:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.502 16:08:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.502 16:08:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.502 16:08:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:52.502 16:08:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.502 16:08:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.503 16:08:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:52.503 16:08:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.503 16:08:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.503 16:08:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:52.503 16:08:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:52.503 16:08:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.503 16:08:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.503 16:08:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.503 16:08:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.503 16:08:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:52.503 16:08:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.503 16:08:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.503 16:08:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.503 16:08:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:52.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:12:52.503 00:12:52.503 --- 10.0.0.2 ping statistics --- 00:12:52.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.503 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:52.503 16:08:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:12:52.503 00:12:52.503 --- 10.0.0.1 ping statistics --- 00:12:52.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.503 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:52.503 16:08:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.503 16:08:53 -- nvmf/common.sh@411 -- # return 0 00:12:52.503 16:08:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:52.503 16:08:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.503 16:08:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:52.503 16:08:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:52.503 16:08:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.503 16:08:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:52.503 16:08:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:52.503 16:08:53 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:52.503 16:08:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:52.503 16:08:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:52.503 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:12:52.503 16:08:53 -- nvmf/common.sh@470 -- # nvmfpid=3376730 00:12:52.503 16:08:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:52.503 16:08:53 -- nvmf/common.sh@471 -- # waitforlisten 3376730 00:12:52.503 16:08:53 -- common/autotest_common.sh@817 -- # '[' -z 3376730 ']' 00:12:52.503 16:08:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.503 16:08:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:52.503 16:08:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.503 16:08:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:52.503 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:12:52.503 [2024-04-24 16:08:53.691272] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:12:52.503 [2024-04-24 16:08:53.691365] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.503 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.503 [2024-04-24 16:08:53.761227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.762 [2024-04-24 16:08:53.876713] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.762 [2024-04-24 16:08:53.876798] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.762 [2024-04-24 16:08:53.876821] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.762 [2024-04-24 16:08:53.876833] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.762 [2024-04-24 16:08:53.876844] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:52.762 [2024-04-24 16:08:53.876877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.698 16:08:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:53.698 16:08:54 -- common/autotest_common.sh@850 -- # return 0 00:12:53.698 16:08:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:53.698 16:08:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:53.698 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:12:53.698 16:08:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.698 16:08:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.698 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.698 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:12:53.698 [2024-04-24 16:08:54.679633] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.698 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.698 16:08:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:53.698 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.698 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:12:53.698 Malloc0 00:12:53.698 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.698 16:08:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:53.698 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.698 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:12:53.698 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.698 16:08:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:53.698 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.698 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:12:53.698 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.698 16:08:54 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.698 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.698 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:12:53.698 [2024-04-24 16:08:54.741622] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.698 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.698 16:08:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=3376887 00:12:53.698 16:08:54 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:53.698 16:08:54 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:53.698 16:08:54 -- target/queue_depth.sh@33 -- # waitforlisten 3376887 /var/tmp/bdevperf.sock 00:12:53.698 16:08:54 -- common/autotest_common.sh@817 -- # '[' -z 3376887 ']' 00:12:53.698 16:08:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:53.698 16:08:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:53.698 16:08:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:53.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:53.698 16:08:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:53.698 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:12:53.698 [2024-04-24 16:08:54.796287] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:12:53.698 [2024-04-24 16:08:54.796372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376887 ] 00:12:53.698 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.698 [2024-04-24 16:08:54.856213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.698 [2024-04-24 16:08:54.959134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.956 16:08:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:53.956 16:08:55 -- common/autotest_common.sh@850 -- # return 0 00:12:53.956 16:08:55 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:53.956 16:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.956 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:12:53.956 NVMe0n1 00:12:53.956 16:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.956 16:08:55 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:54.215 Running I/O for 10 seconds... 00:13:04.191 00:13:04.191 Latency(us) 00:13:04.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.191 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:04.192 Verification LBA range: start 0x0 length 0x4000 00:13:04.192 NVMe0n1 : 10.10 7867.88 30.73 0.00 0.00 129471.21 23981.32 77672.30 00:13:04.192 =================================================================================================================== 00:13:04.192 Total : 7867.88 30.73 0.00 0.00 129471.21 23981.32 77672.30 00:13:04.192 0 00:13:04.192 16:09:05 -- target/queue_depth.sh@39 -- # killprocess 3376887 00:13:04.192 16:09:05 -- common/autotest_common.sh@936 -- # '[' -z 3376887 ']' 00:13:04.192 16:09:05 -- common/autotest_common.sh@940 -- # kill -0 3376887 00:13:04.192 16:09:05 -- common/autotest_common.sh@941 -- # uname 00:13:04.192 16:09:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.192 16:09:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3376887 00:13:04.192 16:09:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:04.192 16:09:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:04.192 16:09:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3376887' 00:13:04.192 killing process with pid 3376887 00:13:04.192 16:09:05 -- common/autotest_common.sh@955 -- # kill 3376887 00:13:04.192 Received shutdown signal, test time was about 10.000000 seconds 00:13:04.192 00:13:04.192 Latency(us) 00:13:04.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.192 =================================================================================================================== 00:13:04.192 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:04.192 16:09:05 -- 
common/autotest_common.sh@960 -- # wait 3376887 00:13:04.450 16:09:05 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:04.450 16:09:05 -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:04.450 16:09:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:04.450 16:09:05 -- nvmf/common.sh@117 -- # sync 00:13:04.451 16:09:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.451 16:09:05 -- nvmf/common.sh@120 -- # set +e 00:13:04.451 16:09:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.451 16:09:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.451 rmmod nvme_tcp 00:13:04.451 rmmod nvme_fabrics 00:13:04.451 rmmod nvme_keyring 00:13:04.451 16:09:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.451 16:09:05 -- nvmf/common.sh@124 -- # set -e 00:13:04.451 16:09:05 -- nvmf/common.sh@125 -- # return 0 00:13:04.451 16:09:05 -- nvmf/common.sh@478 -- # '[' -n 3376730 ']' 00:13:04.451 16:09:05 -- nvmf/common.sh@479 -- # killprocess 3376730 00:13:04.451 16:09:05 -- common/autotest_common.sh@936 -- # '[' -z 3376730 ']' 00:13:04.451 16:09:05 -- common/autotest_common.sh@940 -- # kill -0 3376730 00:13:04.451 16:09:05 -- common/autotest_common.sh@941 -- # uname 00:13:04.709 16:09:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.709 16:09:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3376730 00:13:04.709 16:09:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:04.709 16:09:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:04.709 16:09:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3376730' 00:13:04.709 killing process with pid 3376730 00:13:04.709 16:09:05 -- common/autotest_common.sh@955 -- # kill 3376730 00:13:04.709 16:09:05 -- common/autotest_common.sh@960 -- # wait 3376730 00:13:04.967 16:09:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:04.967 16:09:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:04.967 16:09:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:04.967 16:09:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.967 16:09:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.967 16:09:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.967 16:09:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.967 16:09:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.872 16:09:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.872 00:13:06.872 real 0m16.606s 00:13:06.872 user 0m23.355s 00:13:06.872 sys 0m2.988s 00:13:06.872 16:09:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:06.872 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:13:06.872 ************************************ 00:13:06.872 END TEST nvmf_queue_depth 00:13:06.872 ************************************ 00:13:06.873 16:09:08 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:06.873 16:09:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:06.873 16:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.873 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:13:07.131 ************************************ 00:13:07.131 START TEST nvmf_multipath 00:13:07.131 ************************************ 00:13:07.131 16:09:08 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:07.131 * Looking for test storage... 00:13:07.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.131 16:09:08 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.131 16:09:08 -- nvmf/common.sh@7 -- # uname -s 00:13:07.132 16:09:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.132 16:09:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.132 16:09:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.132 16:09:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.132 16:09:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.132 16:09:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.132 16:09:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.132 16:09:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.132 16:09:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.132 16:09:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.132 16:09:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:07.132 16:09:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:07.132 16:09:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.132 16:09:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.132 16:09:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.132 16:09:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.132 16:09:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.132 16:09:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.132 16:09:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.132 16:09:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.132 16:09:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.132 16:09:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.132 16:09:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.132 16:09:08 -- paths/export.sh@5 -- # export PATH 00:13:07.132 16:09:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.132 16:09:08 -- nvmf/common.sh@47 -- # : 0 00:13:07.132 16:09:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.132 16:09:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.132 16:09:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.132 16:09:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.132 16:09:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.132 16:09:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.132 16:09:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.132 16:09:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.132 16:09:08 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.132 16:09:08 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.132 16:09:08 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:07.132 16:09:08 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.132 16:09:08 -- target/multipath.sh@43 -- # nvmftestinit 00:13:07.132 16:09:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:07.132 16:09:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.132 16:09:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:07.132 16:09:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:07.132 16:09:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:07.132 16:09:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.132 16:09:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.132 16:09:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.132 16:09:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:07.132 16:09:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:07.132 16:09:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.132 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:13:09.035 16:09:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:09.035 16:09:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.035 16:09:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:09.035 16:09:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.035 16:09:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.035 16:09:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.035 16:09:10 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.035 16:09:10 -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.035 16:09:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.035 16:09:10 -- nvmf/common.sh@296 -- # e810=() 00:13:09.035 16:09:10 -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.035 16:09:10 -- nvmf/common.sh@297 -- # x722=() 00:13:09.035 16:09:10 -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.035 16:09:10 -- nvmf/common.sh@298 -- # mlx=() 00:13:09.035 16:09:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.035 16:09:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.035 16:09:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.035 16:09:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.035 16:09:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.035 16:09:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.035 16:09:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:09.035 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:09.035 16:09:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.035 16:09:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:09.035 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:09.035 16:09:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.035 16:09:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.036 16:09:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.036 16:09:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.036 16:09:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.036 16:09:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.036 16:09:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.036 16:09:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.036 16:09:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.036 16:09:10 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:13:09.036 16:09:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.036 16:09:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:09.036 Found net devices under 0000:09:00.0: cvl_0_0 00:13:09.036 16:09:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.036 16:09:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.036 16:09:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.036 16:09:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:09.036 16:09:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.036 16:09:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:09.036 Found net devices under 0000:09:00.1: cvl_0_1 00:13:09.036 16:09:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.036 16:09:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:09.036 16:09:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:09.036 16:09:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:09.036 16:09:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:09.036 16:09:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:09.036 16:09:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.036 16:09:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.036 16:09:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.036 16:09:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.036 16:09:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.036 16:09:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.036 16:09:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.036 16:09:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.036 16:09:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.036 16:09:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.036 16:09:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.036 16:09:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.036 16:09:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.036 16:09:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.036 16:09:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.036 16:09:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.036 16:09:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.036 16:09:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.036 16:09:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.036 16:09:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:13:09.036 00:13:09.036 --- 10.0.0.2 ping statistics --- 00:13:09.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.036 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:09.036 16:09:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:13:09.036 00:13:09.036 --- 10.0.0.1 ping statistics --- 00:13:09.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.036 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:09.036 16:09:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.036 16:09:10 -- nvmf/common.sh@411 -- # return 0 00:13:09.036 16:09:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:09.036 16:09:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.036 16:09:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:09.036 16:09:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:09.036 16:09:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.036 16:09:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:09.036 16:09:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:09.036 16:09:10 -- target/multipath.sh@45 -- # '[' -z ']' 00:13:09.036 16:09:10 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:09.036 only one NIC for nvmf test 00:13:09.036 16:09:10 -- target/multipath.sh@47 -- # nvmftestfini 00:13:09.036 16:09:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:09.036 16:09:10 -- nvmf/common.sh@117 -- # sync 00:13:09.036 16:09:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:09.036 16:09:10 -- nvmf/common.sh@120 -- # set +e 00:13:09.036 16:09:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:09.036 16:09:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:09.036 rmmod nvme_tcp 00:13:09.297 rmmod nvme_fabrics 00:13:09.297 rmmod nvme_keyring 00:13:09.297 16:09:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:09.297 16:09:10 -- nvmf/common.sh@124 -- # set -e 00:13:09.297 16:09:10 -- nvmf/common.sh@125 -- # return 0 00:13:09.297 16:09:10 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:09.297 16:09:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:09.297 16:09:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:09.297 16:09:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:09.297 16:09:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.297 16:09:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:09.297 16:09:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.297 16:09:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.297 16:09:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.203 16:09:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:11.203 16:09:12 -- target/multipath.sh@48 -- # exit 0 00:13:11.203 16:09:12 -- target/multipath.sh@1 -- # nvmftestfini 00:13:11.203 16:09:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:11.203 16:09:12 -- nvmf/common.sh@117 -- # sync 00:13:11.203 16:09:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.203 16:09:12 -- nvmf/common.sh@120 -- # set +e 00:13:11.203 16:09:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.203 16:09:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.203 16:09:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.203 16:09:12 -- nvmf/common.sh@124 -- # set -e 00:13:11.203 16:09:12 -- nvmf/common.sh@125 -- # return 0 00:13:11.203 16:09:12 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:11.203 16:09:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:11.204 16:09:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:11.204 16:09:12 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:13:11.204 16:09:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.204 16:09:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.204 16:09:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.204 16:09:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.204 16:09:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.204 16:09:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:11.204 00:13:11.204 real 0m4.201s 00:13:11.204 user 0m0.764s 00:13:11.204 sys 0m1.437s 00:13:11.204 16:09:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:11.204 16:09:12 -- common/autotest_common.sh@10 -- # set +x 00:13:11.204 ************************************ 00:13:11.204 END TEST nvmf_multipath 00:13:11.204 ************************************ 00:13:11.204 16:09:12 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:11.204 16:09:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:11.204 16:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.204 16:09:12 -- common/autotest_common.sh@10 -- # set +x 00:13:11.461 ************************************ 00:13:11.461 START TEST nvmf_zcopy 00:13:11.461 ************************************ 00:13:11.461 16:09:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:11.461 * Looking for test storage... 00:13:11.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.462 16:09:12 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.462 16:09:12 -- nvmf/common.sh@7 -- # uname -s 00:13:11.462 16:09:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.462 16:09:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.462 16:09:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.462 16:09:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.462 16:09:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.462 16:09:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.462 16:09:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.462 16:09:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.462 16:09:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.462 16:09:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.462 16:09:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:11.462 16:09:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:11.462 16:09:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.462 16:09:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.462 16:09:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.462 16:09:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.462 16:09:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.462 16:09:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.462 16:09:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.462 16:09:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.462 
16:09:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.462 16:09:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.462 16:09:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.462 16:09:12 -- paths/export.sh@5 -- # export PATH 00:13:11.462 16:09:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.462 16:09:12 -- nvmf/common.sh@47 -- # : 0 00:13:11.462 16:09:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.462 16:09:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.462 16:09:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.462 16:09:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.462 16:09:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.462 16:09:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.462 16:09:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.462 16:09:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.462 16:09:12 -- target/zcopy.sh@12 -- # nvmftestinit 00:13:11.462 16:09:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:11.462 16:09:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.462 16:09:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:11.462 16:09:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:11.462 16:09:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:11.462 16:09:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.462 16:09:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:13:11.462 16:09:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.462 16:09:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:11.462 16:09:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:11.462 16:09:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:11.462 16:09:12 -- common/autotest_common.sh@10 -- # set +x 00:13:13.376 16:09:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:13.376 16:09:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.376 16:09:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.376 16:09:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.376 16:09:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.376 16:09:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.376 16:09:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.376 16:09:14 -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.376 16:09:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.376 16:09:14 -- nvmf/common.sh@296 -- # e810=() 00:13:13.376 16:09:14 -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.376 16:09:14 -- nvmf/common.sh@297 -- # x722=() 00:13:13.376 16:09:14 -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.376 16:09:14 -- nvmf/common.sh@298 -- # mlx=() 00:13:13.376 16:09:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.376 16:09:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.376 16:09:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.376 16:09:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:13.376 16:09:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.376 16:09:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.376 16:09:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:13.376 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:13.376 16:09:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.376 16:09:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:13.376 Found 0000:09:00.1 (0x8086 - 
0x159b) 00:13:13.376 16:09:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.376 16:09:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.376 16:09:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.376 16:09:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:13.376 16:09:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.376 16:09:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:13.376 Found net devices under 0000:09:00.0: cvl_0_0 00:13:13.376 16:09:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.376 16:09:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.376 16:09:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.376 16:09:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:13.376 16:09:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.376 16:09:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:13.376 Found net devices under 0000:09:00.1: cvl_0_1 00:13:13.376 16:09:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.376 16:09:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:13.376 16:09:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:13.376 16:09:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:13.376 16:09:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.376 16:09:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.376 16:09:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.376 16:09:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.376 16:09:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.376 16:09:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.376 16:09:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.376 16:09:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.376 16:09:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.376 16:09:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.376 16:09:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.376 16:09:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.376 16:09:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.376 16:09:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.376 16:09:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.376 16:09:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:13.376 16:09:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.376 16:09:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.376 
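[Note: condensed for reference, the namespace plumbing traced above amounts to the following iproute2 commands (run as root; cvl_0_0/cvl_0_1 are this host's ice ports, so names will differ elsewhere):]
    ip netns add cvl_0_0_ns_spdk                                          # isolate the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up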
16:09:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.376 16:09:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:13.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:13:13.376 00:13:13.376 --- 10.0.0.2 ping statistics --- 00:13:13.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.376 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:13:13.376 16:09:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:13:13.376 00:13:13.376 --- 10.0.0.1 ping statistics --- 00:13:13.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.376 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:13.376 16:09:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.376 16:09:14 -- nvmf/common.sh@411 -- # return 0 00:13:13.376 16:09:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:13.376 16:09:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.376 16:09:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:13.376 16:09:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.376 16:09:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:13.376 16:09:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:13.376 16:09:14 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:13.376 16:09:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:13.376 16:09:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:13.376 16:09:14 -- common/autotest_common.sh@10 -- # set +x 00:13:13.376 16:09:14 -- nvmf/common.sh@470 -- # nvmfpid=3382693 00:13:13.376 16:09:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:13.376 16:09:14 -- nvmf/common.sh@471 -- # waitforlisten 3382693 00:13:13.376 16:09:14 -- common/autotest_common.sh@817 -- # '[' -z 3382693 ']' 00:13:13.376 16:09:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.376 16:09:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:13.376 16:09:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.376 16:09:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:13.376 16:09:14 -- common/autotest_common.sh@10 -- # set +x 00:13:13.665 [2024-04-24 16:09:14.665254] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:13:13.665 [2024-04-24 16:09:14.665331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.665 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.665 [2024-04-24 16:09:14.736406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.665 [2024-04-24 16:09:14.852238] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:13.665 [2024-04-24 16:09:14.852309] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.665 [2024-04-24 16:09:14.852326] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.665 [2024-04-24 16:09:14.852341] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.665 [2024-04-24 16:09:14.852353] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.665 [2024-04-24 16:09:14.852390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.992 16:09:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:13.992 16:09:14 -- common/autotest_common.sh@850 -- # return 0 00:13:13.992 16:09:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:13.992 16:09:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:13.992 16:09:14 -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 16:09:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.992 16:09:14 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:13.992 16:09:14 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:13.992 16:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.992 16:09:15 -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 [2024-04-24 16:09:15.005369] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.992 16:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.992 16:09:15 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:13.992 16:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.992 16:09:15 -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 16:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.992 16:09:15 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.992 16:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.992 16:09:15 -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 [2024-04-24 16:09:15.021579] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.992 16:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.992 16:09:15 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.992 16:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.992 16:09:15 -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 16:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.992 16:09:15 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:13.992 16:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.992 16:09:15 -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 malloc0 00:13:13.992 16:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.992 16:09:15 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:13.992 16:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.992 16:09:15 -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 16:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.992 16:09:15 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:13.992 16:09:15 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:13.993 16:09:15 -- nvmf/common.sh@521 -- # config=() 00:13:13.993 16:09:15 -- nvmf/common.sh@521 -- # local subsystem config 00:13:13.993 16:09:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:13.993 16:09:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:13.993 { 00:13:13.993 "params": { 00:13:13.993 "name": "Nvme$subsystem", 00:13:13.993 "trtype": "$TEST_TRANSPORT", 00:13:13.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:13.993 "adrfam": "ipv4", 00:13:13.993 "trsvcid": "$NVMF_PORT", 00:13:13.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:13.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:13.993 "hdgst": ${hdgst:-false}, 00:13:13.993 "ddgst": ${ddgst:-false} 00:13:13.993 }, 00:13:13.993 "method": "bdev_nvme_attach_controller" 00:13:13.993 } 00:13:13.993 EOF 00:13:13.993 )") 00:13:13.993 16:09:15 -- nvmf/common.sh@543 -- # cat 00:13:13.993 16:09:15 -- nvmf/common.sh@545 -- # jq . 00:13:13.993 16:09:15 -- nvmf/common.sh@546 -- # IFS=, 00:13:13.993 16:09:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:13.993 "params": { 00:13:13.993 "name": "Nvme1", 00:13:13.993 "trtype": "tcp", 00:13:13.993 "traddr": "10.0.0.2", 00:13:13.993 "adrfam": "ipv4", 00:13:13.993 "trsvcid": "4420", 00:13:13.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:13.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:13.993 "hdgst": false, 00:13:13.993 "ddgst": false 00:13:13.993 }, 00:13:13.993 "method": "bdev_nvme_attach_controller" 00:13:13.993 }' 00:13:13.993 [2024-04-24 16:09:15.105396] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:13:13.993 [2024-04-24 16:09:15.105478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382722 ] 00:13:13.993 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.993 [2024-04-24 16:09:15.174156] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.252 [2024-04-24 16:09:15.287687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.252 Running I/O for 10 seconds... 
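[Note: rpc_cmd in the trace above is a thin wrapper around scripts/rpc.py, so the target that bdevperf just attached to can be reproduced with plain rpc.py calls; a sketch, assuming the default /var/tmp/spdk.sock RPC socket that waitforlisten polled earlier (flags copied from the rpc_cmd traces):]
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                     # TCP transport with zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                            # 32 MiB ram disk, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1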
00:13:24.241
00:13:24.241 Latency(us)
00:13:24.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:24.241 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:24.241 Verification LBA range: start 0x0 length 0x1000
00:13:24.241 Nvme1n1 : 10.02 5631.37 44.00 0.00 0.00 22667.86 3301.07 33204.91
00:13:24.241 ===================================================================================================================
00:13:24.241 Total : 5631.37 44.00 0.00 0.00 22667.86 3301.07 33204.91
00:13:24.501 16:09:25 -- target/zcopy.sh@39 -- # perfpid=3384026
00:13:24.501 16:09:25 -- target/zcopy.sh@41 -- # xtrace_disable
00:13:24.501 16:09:25 -- common/autotest_common.sh@10 -- # set +x
00:13:24.501 16:09:25 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:24.501 16:09:25 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:24.501 16:09:25 -- nvmf/common.sh@521 -- # config=()
00:13:24.501 16:09:25 -- nvmf/common.sh@521 -- # local subsystem config
00:13:24.501 16:09:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:13:24.501 16:09:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:13:24.501 {
00:13:24.501 "params": {
00:13:24.501 "name": "Nvme$subsystem",
00:13:24.501 "trtype": "$TEST_TRANSPORT",
00:13:24.501 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:24.501 "adrfam": "ipv4",
00:13:24.501 "trsvcid": "$NVMF_PORT",
00:13:24.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:24.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:24.501 "hdgst": ${hdgst:-false},
00:13:24.501 "ddgst": ${ddgst:-false}
00:13:24.501 },
00:13:24.501 "method": "bdev_nvme_attach_controller"
00:13:24.501 }
00:13:24.501 EOF
00:13:24.501 )")
[2024-04-24 16:09:25.786039] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-04-24 16:09:25.786082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:24.760 16:09:25 -- nvmf/common.sh@543 -- # cat
00:13:24.760 16:09:25 -- nvmf/common.sh@545 -- # jq .
00:13:24.760 16:09:25 -- nvmf/common.sh@546 -- # IFS=,
00:13:24.760 16:09:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:13:24.760 "params": {
00:13:24.760 "name": "Nvme1",
00:13:24.760 "trtype": "tcp",
00:13:24.760 "traddr": "10.0.0.2",
00:13:24.760 "adrfam": "ipv4",
00:13:24.760 "trsvcid": "4420",
00:13:24.760 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:24.760 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:24.760 "hdgst": false,
00:13:24.760 "ddgst": false
00:13:24.760 },
00:13:24.760 "method": "bdev_nvme_attach_controller"
00:13:24.760 }'
[2024-04-24 16:09:25.793991] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-04-24 16:09:25.794033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same subsystem.c:1900 / nvmf_rpc.c:1534 error pair repeats at roughly 8-12 ms intervals from 16:09:25.802008 onward for the rest of this excerpt; the several hundred repetitions are elided here and only the interleaved one-off records below are kept]
[2024-04-24 16:09:25.824536] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
[2024-04-24 16:09:25.824592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384026 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-04-24 16:09:25.886653] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-24 16:09:25.995936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:25.021 Running I/O for 5 seconds...
[the error pair continues through 16:09:27.826453, where this excerpt is cut off mid-record]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.581 [2024-04-24 16:09:27.838504] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.581 [2024-04-24 16:09:27.838534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.581 [2024-04-24 16:09:27.851100] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.581 [2024-04-24 16:09:27.851130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.581 [2024-04-24 16:09:27.863369] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.581 [2024-04-24 16:09:27.863404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.875358] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.875389] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.887154] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.887184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.900672] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.900703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.911521] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.911552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.923670] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.923700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.935469] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.935499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.947307] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.947337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.958796] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.958824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.970767] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.970819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.982116] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.982147] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:27.993106] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:27.993136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.004423] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.004452] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.017662] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.017693] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.028049] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.028093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.040163] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.040193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.051866] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.051896] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.063390] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.063420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.076408] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.076438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.087152] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.087182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.098522] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.098552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.110388] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.110419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.841 [2024-04-24 16:09:28.122108] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.841 [2024-04-24 16:09:28.122139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.133617] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.133649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.145255] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.145285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.156569] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.156598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.168352] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.168382] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.179993] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.180020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.193414] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.193453] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.204357] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.204387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.215766] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.215793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.228166] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.228196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.238933] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.238960] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.250218] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.250248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.263643] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.263673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.274867] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.274894] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.286581] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.286612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.297963] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.297990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.309477] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.309506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.321335] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.321366] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.332884] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.332911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.344451] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.344481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.356114] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.356145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.368107] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.368138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.100 [2024-04-24 16:09:28.381735] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.100 [2024-04-24 16:09:28.381794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.359 [2024-04-24 16:09:28.392720] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.359 [2024-04-24 16:09:28.392760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.359 [2024-04-24 16:09:28.404419] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.359 [2024-04-24 16:09:28.404450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.359 [2024-04-24 16:09:28.416051] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.359 [2024-04-24 16:09:28.416091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.359 [2024-04-24 16:09:28.427523] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.359 [2024-04-24 16:09:28.427553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.359 [2024-04-24 16:09:28.439119] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.439150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.452398] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.452428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.463122] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.463153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.475041] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.475087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.486716] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.486757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.498487] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.498516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.510004] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.510031] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.521555] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.521585] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.532739] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.532796] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.543824] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.543851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.556764] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.556806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.566974] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.567002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.577875] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.577903] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.590915] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.590942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.601007] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.601049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.613102] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.613133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.624651] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.624682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.360 [2024-04-24 16:09:28.635980] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.360 [2024-04-24 16:09:28.636008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.647918] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.647947] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.659563] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.659593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.670921] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.670949] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.682440] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.682470] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.694044] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.694074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.705686] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.705717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.716968] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.716995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.728353] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.728382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.742077] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.742107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.752897] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.752926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.764833] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.764861] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.775721] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.775759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.787596] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.787626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.798884] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.798911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.810912] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.810939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.822602] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.822632] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.833943] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.833970] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.844439] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.844467] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.855194] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.855221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.867997] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.868024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.878508] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.878536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.889365] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.889392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.621 [2024-04-24 16:09:28.900117] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.621 [2024-04-24 16:09:28.900144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:28.911018] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:28.911047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:28.923609] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:28.923636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:28.931974] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:28.932002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:28.944631] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:28.944658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:28.954356] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:28.954383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:28.964851] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:28.964879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:28.975358] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:28.975400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:28.985526] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:28.985553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:28.995900] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:28.995927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.006329] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.006356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.016584] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.016611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.027240] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.027267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.039305] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.039333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.048722] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.048760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.059558] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.059587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.070448] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.070476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.081172] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.081199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.092342] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.092369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.103162] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.103189] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.115896] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.115923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.125295] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.125322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.136585] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.136613] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.147289] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.147317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.880 [2024-04-24 16:09:29.158219] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.880 [2024-04-24 16:09:29.158247] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.169204] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.169232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.182380] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.182407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.192874] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.192900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.203826] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.203853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.214514] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.214541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.225224] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.225252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.235713] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.235740] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.246534] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.246561] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.259049] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.259076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.269208] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.269235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.279852] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.279878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.291046] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.291073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.301902] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.301930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.312807] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.312833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.323716] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.323751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.335241] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.335272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.347358] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.347388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.359155] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.359185] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.370641] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.370671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.382586] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.382616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.394378] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.394408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.408358] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.408388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.140 [2024-04-24 16:09:29.419419] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.140 [2024-04-24 16:09:29.419449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.431330] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.431362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.443394] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.443424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.455337] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.455368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.466718] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.466757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.478373] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.478414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.490048] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.490078] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.503798] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.503825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.514559] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.514589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.526608] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.526638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.538325] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.538355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.549974] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.550001] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.561375] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.561404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.573000] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.573043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.584916] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.584943] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.596401] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.596431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.607830] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.607858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.619480] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.619511] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.630841] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.630875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.642441] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.642471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.654283] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.654314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.665619] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.665648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.400 [2024-04-24 16:09:29.677291] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.400 [2024-04-24 16:09:29.677322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.661 [2024-04-24 16:09:29.688953] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.661 [2024-04-24 16:09:29.688982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.661 [2024-04-24 16:09:29.699885] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.661 [2024-04-24 16:09:29.699921] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.661 [2024-04-24 16:09:29.711340] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.661 [2024-04-24 16:09:29.711369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.661 [2024-04-24 16:09:29.722926] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.661 [2024-04-24 16:09:29.722954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.661 [2024-04-24 16:09:29.734485] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.661 [2024-04-24 16:09:29.734515] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.661 [2024-04-24 16:09:29.745897] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.661 [2024-04-24 16:09:29.745925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.661 [2024-04-24 16:09:29.757628] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.661 [2024-04-24 16:09:29.757659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.661 [2024-04-24 16:09:29.769175] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.769206] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.780699] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.780729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.791984] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.792012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.803289] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.803319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.814849] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.814876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.826238] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.826269] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.837207] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.837238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.848688] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.848718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.860131] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.860162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.871599] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.871629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.882902] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.882930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.894372] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.894401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.905868] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.905896] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.917261] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.917306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.928804] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.928831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.662 [2024-04-24 16:09:29.940256] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.662 [2024-04-24 16:09:29.940286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.921 [2024-04-24 16:09:29.951607] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.921 [2024-04-24 16:09:29.951639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.921 [2024-04-24 16:09:29.963098] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.921 [2024-04-24 16:09:29.963130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.921 [2024-04-24 16:09:29.974295] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.921 [2024-04-24 16:09:29.974326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.921 [2024-04-24 16:09:29.985475] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.921 [2024-04-24 16:09:29.985505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.921 [2024-04-24 16:09:29.997002] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:28.921 [2024-04-24 16:09:29.997044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the two *ERROR* lines above repeat as a pair, with only the timestamps advancing, from 16:09:29.997 through 16:09:31.323 -- roughly 120 identical pairs condensed here; each pair is one nvmf_subsystem_add_ns attempt for NSID 1 rejected because the namespace is still attached)
00:13:30.223
00:13:30.223 Latency(us)
00:13:30.223 Device Information                                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:13:30.223 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:30.223 Nvme1n1                                                :       5.01   11163.32      87.21      0.00     0.00   11450.18    4854.52   19223.89
00:13:30.223 ===================================================================================================================
00:13:30.223 Total                                                  :            11163.32      87.21      0.00     0.00   11450.18    4854.52   19223.89
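(Context for the condensed error storm: while the randrw job above is in flight, the zcopy test keeps re-issuing nvmf_subsystem_add_ns for an NSID that is already attached, verifying that every attempt is rejected cleanly rather than disturbing the live namespace. A minimal by-hand sketch with SPDK's stock rpc.py helper -- the test's rpc_cmd wrapper resolves to the same script; "SomeBdev" is a hypothetical stand-in for whatever bdev currently backs NSID 1:)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Expected to fail with "Requested NSID 1 already in use" while NSID 1 is attached;
  # the namespace itself is untouched.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 SomeBdev -n 1 \
      || echo 'rejected as expected'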
(the same *ERROR* pair resumes at 16:09:31.331 and repeats through 16:09:31.588 -- roughly 30 more pairs condensed -- while the abort run below is being set up)
00:13:30.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3384026) - No such process
00:13:30.483 16:09:31 -- target/zcopy.sh@49 -- # wait 3384026
00:13:30.483 16:09:31 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:30.483 16:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:30.483 16:09:31 -- common/autotest_common.sh@10 -- # set +x
00:13:30.483 16:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:30.483 16:09:31 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
16:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable
16:09:31 -- common/autotest_common.sh@10 -- # set +x
00:13:30.483 delay0
16:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:30.483 16:09:31 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
16:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable
16:09:31 -- common/autotest_common.sh@10 -- # set +x
00:13:30.483 16:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:30.483 16:09:31 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:13:30.483 EAL: No free 2048 kB hugepages reported on node 1
00:13:30.483 [2024-04-24 16:09:31.745943] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:13:37.048 Initializing NVMe Controllers
00:13:37.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:37.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:37.048 Initialization complete. Launching workers.
00:13:37.048 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 5585 00:13:37.048 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5815, failed to submit 60 00:13:37.048 success 5655, unsuccess 160, failed 0 00:13:37.048 16:09:37 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:37.048 16:09:37 -- target/zcopy.sh@60 -- # nvmftestfini 00:13:37.048 16:09:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:37.048 16:09:37 -- nvmf/common.sh@117 -- # sync 00:13:37.048 16:09:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.048 16:09:37 -- nvmf/common.sh@120 -- # set +e 00:13:37.048 16:09:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.048 16:09:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.048 rmmod nvme_tcp 00:13:37.048 rmmod nvme_fabrics 00:13:37.048 rmmod nvme_keyring 00:13:37.048 16:09:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.048 16:09:38 -- nvmf/common.sh@124 -- # set -e 00:13:37.048 16:09:38 -- nvmf/common.sh@125 -- # return 0 00:13:37.048 16:09:38 -- nvmf/common.sh@478 -- # '[' -n 3382693 ']' 00:13:37.048 16:09:38 -- nvmf/common.sh@479 -- # killprocess 3382693 00:13:37.048 16:09:38 -- common/autotest_common.sh@936 -- # '[' -z 3382693 ']' 00:13:37.049 16:09:38 -- common/autotest_common.sh@940 -- # kill -0 3382693 00:13:37.049 16:09:38 -- common/autotest_common.sh@941 -- # uname 00:13:37.049 16:09:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:37.049 16:09:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3382693 00:13:37.049 16:09:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:37.049 16:09:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:37.049 16:09:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3382693' 00:13:37.049 killing process with pid 3382693 00:13:37.049 16:09:38 -- common/autotest_common.sh@955 -- # kill 3382693 00:13:37.049 16:09:38 -- common/autotest_common.sh@960 -- # wait 3382693 00:13:37.308 16:09:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:37.308 16:09:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:37.308 16:09:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:37.308 16:09:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.308 16:09:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.309 16:09:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.309 16:09:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.309 16:09:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.215 16:09:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:39.215 00:13:39.215 real 0m27.848s 00:13:39.215 user 0m41.030s 00:13:39.215 sys 0m8.519s 00:13:39.215 16:09:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.215 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:39.215 ************************************ 00:13:39.215 END TEST nvmf_zcopy 00:13:39.215 ************************************ 00:13:39.215 16:09:40 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:39.215 16:09:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:39.215 16:09:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.215 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:39.474 
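(The abort pass that closed the zcopy test above pairs build/examples/abort with a delay bdev: one second of injected latency -- the -r/-t/-w/-n arguments are in microseconds -- keeps I/O queued long enough to be cancelled, which is how 5815 aborts could be submitted against roughly 5875 I/Os. A sketch of the same setup by hand, assuming the target from this run is still listening on 10.0.0.2:4420 and malloc0 still exists:)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Wrap malloc0 in a delay bdev: 1,000,000 us average and p99 latency for reads and writes.
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Queue 5 s of randrw I/O at depth 64 from core 0 and abort it.
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'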
************************************ 00:13:39.474 START TEST nvmf_nmic 00:13:39.474 ************************************ 00:13:39.474 16:09:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:39.474 * Looking for test storage... 00:13:39.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.474 16:09:40 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.474 16:09:40 -- nvmf/common.sh@7 -- # uname -s 00:13:39.474 16:09:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.474 16:09:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.474 16:09:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.474 16:09:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.474 16:09:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.474 16:09:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.474 16:09:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.474 16:09:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.474 16:09:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.474 16:09:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.474 16:09:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:39.474 16:09:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:39.474 16:09:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.474 16:09:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.474 16:09:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.474 16:09:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.474 16:09:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.474 16:09:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.474 16:09:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.474 16:09:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.474 16:09:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.474 16:09:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.474 16:09:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.474 16:09:40 -- paths/export.sh@5 -- # export PATH 00:13:39.474 16:09:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.474 16:09:40 -- nvmf/common.sh@47 -- # : 0 00:13:39.474 16:09:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.474 16:09:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.474 16:09:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.474 16:09:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.474 16:09:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.474 16:09:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.474 16:09:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.474 16:09:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.474 16:09:40 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.474 16:09:40 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.474 16:09:40 -- target/nmic.sh@14 -- # nvmftestinit 00:13:39.474 16:09:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:39.474 16:09:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.474 16:09:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:39.474 16:09:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:39.474 16:09:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:39.474 16:09:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.474 16:09:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.474 16:09:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.474 16:09:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:39.474 16:09:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:39.474 16:09:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.474 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:41.375 16:09:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:41.375 16:09:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:41.375 16:09:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:41.375 16:09:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:41.375 16:09:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:41.375 16:09:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:41.375 16:09:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:41.375 16:09:42 -- nvmf/common.sh@295 -- # net_devs=() 00:13:41.375 16:09:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:41.375 16:09:42 -- nvmf/common.sh@296 -- # 
e810=() 00:13:41.375 16:09:42 -- nvmf/common.sh@296 -- # local -ga e810 00:13:41.375 16:09:42 -- nvmf/common.sh@297 -- # x722=() 00:13:41.375 16:09:42 -- nvmf/common.sh@297 -- # local -ga x722 00:13:41.375 16:09:42 -- nvmf/common.sh@298 -- # mlx=() 00:13:41.375 16:09:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:41.375 16:09:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.375 16:09:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:41.375 16:09:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:41.375 16:09:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:41.375 16:09:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:41.375 16:09:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:41.375 16:09:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:41.375 16:09:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.375 16:09:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:41.376 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:41.376 16:09:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.376 16:09:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:41.376 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:41.376 16:09:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:41.376 16:09:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.376 16:09:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.376 16:09:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:41.376 16:09:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.376 16:09:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:41.376 Found net 
devices under 0000:09:00.0: cvl_0_0 00:13:41.376 16:09:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.376 16:09:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.376 16:09:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.376 16:09:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:41.376 16:09:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.376 16:09:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:41.376 Found net devices under 0000:09:00.1: cvl_0_1 00:13:41.376 16:09:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.376 16:09:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:41.376 16:09:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:41.376 16:09:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:41.376 16:09:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.376 16:09:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.376 16:09:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.376 16:09:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:41.376 16:09:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.376 16:09:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.376 16:09:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:41.376 16:09:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.376 16:09:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.376 16:09:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:41.376 16:09:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:41.376 16:09:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.376 16:09:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.376 16:09:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.376 16:09:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.376 16:09:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:41.376 16:09:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.376 16:09:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.376 16:09:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.376 16:09:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:41.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:13:41.376 00:13:41.376 --- 10.0.0.2 ping statistics --- 00:13:41.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.376 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:13:41.376 16:09:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:41.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:13:41.376 00:13:41.376 --- 10.0.0.1 ping statistics --- 00:13:41.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.376 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:41.376 16:09:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.376 16:09:42 -- nvmf/common.sh@411 -- # return 0 00:13:41.376 16:09:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:41.376 16:09:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.376 16:09:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:41.376 16:09:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.376 16:09:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:41.376 16:09:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:41.635 16:09:42 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:41.635 16:09:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:41.635 16:09:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:41.635 16:09:42 -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 16:09:42 -- nvmf/common.sh@470 -- # nvmfpid=3387310 00:13:41.635 16:09:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:41.635 16:09:42 -- nvmf/common.sh@471 -- # waitforlisten 3387310 00:13:41.635 16:09:42 -- common/autotest_common.sh@817 -- # '[' -z 3387310 ']' 00:13:41.635 16:09:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.635 16:09:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:41.635 16:09:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.635 16:09:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:41.635 16:09:42 -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 [2024-04-24 16:09:42.715360] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:13:41.635 [2024-04-24 16:09:42.715437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.635 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.635 [2024-04-24 16:09:42.780030] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.635 [2024-04-24 16:09:42.885178] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.635 [2024-04-24 16:09:42.885236] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.635 [2024-04-24 16:09:42.885266] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.635 [2024-04-24 16:09:42.885278] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.635 [2024-04-24 16:09:42.885288] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
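(Topology behind the ping exchange above: nvmftestinit moved the first E810 port, cvl_0_0, into a private network namespace for the target and left its sibling cvl_0_1 in the root namespace for the initiator, so the two ends of the TCP transport run in separate network stacks. Condensed from the trace above; root privileges assumed:)
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port -> target namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target (0.246 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator (0.077 ms above)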
00:13:41.635 [2024-04-24 16:09:42.885403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.635 [2024-04-24 16:09:42.885465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.635 [2024-04-24 16:09:42.885528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.635 [2024-04-24 16:09:42.885530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.894 16:09:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.894 16:09:43 -- common/autotest_common.sh@850 -- # return 0 00:13:41.894 16:09:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:41.894 16:09:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 16:09:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.894 16:09:43 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.894 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 [2024-04-24 16:09:43.042498] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.894 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.894 16:09:43 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:41.894 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 Malloc0 00:13:41.894 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.894 16:09:43 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:41.894 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.894 16:09:43 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.894 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.894 16:09:43 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.894 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 [2024-04-24 16:09:43.095897] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.894 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.894 16:09:43 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:41.894 test case1: single bdev can't be used in multiple subsystems 00:13:41.894 16:09:43 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:41.894 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.894 16:09:43 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:41.894 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 
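(test case1, set up above and failing just below: the first nvmf_subsystem_add_ns takes an exclusive write claim on Malloc0 on behalf of cnode1, so cnode2's attempt to add the same bdev is rejected with "bdev Malloc0 already claimed". The equivalent rpc.py sequence, as a sketch against a fresh target; names follow this run:)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # Fails: Malloc0 is already claimed (type exclusive_write) by the first subsystem.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0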
00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.894 16:09:43 -- target/nmic.sh@28 -- # nmic_status=0 00:13:41.894 16:09:43 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:41.894 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 [2024-04-24 16:09:43.119696] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:41.894 [2024-04-24 16:09:43.119740] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:41.894 [2024-04-24 16:09:43.119764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 request: 00:13:41.894 { 00:13:41.894 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:41.894 "namespace": { 00:13:41.894 "bdev_name": "Malloc0", 00:13:41.894 "no_auto_visible": false 00:13:41.894 }, 00:13:41.894 "method": "nvmf_subsystem_add_ns", 00:13:41.894 "req_id": 1 00:13:41.894 } 00:13:41.894 Got JSON-RPC error response 00:13:41.894 response: 00:13:41.894 { 00:13:41.894 "code": -32602, 00:13:41.894 "message": "Invalid parameters" 00:13:41.894 } 00:13:41.894 16:09:43 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:41.894 16:09:43 -- target/nmic.sh@29 -- # nmic_status=1 00:13:41.894 16:09:43 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:41.894 16:09:43 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:41.894 Adding namespace failed - expected result. 00:13:41.894 16:09:43 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:41.894 test case2: host connect to nvmf target in multiple paths 00:13:41.894 16:09:43 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:41.894 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.894 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 [2024-04-24 16:09:43.127848] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:41.894 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.894 16:09:43 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.830 16:09:43 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:43.089 16:09:44 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:43.089 16:09:44 -- common/autotest_common.sh@1184 -- # local i=0 00:13:43.089 16:09:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.089 16:09:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:43.089 16:09:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:45.623 16:09:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:45.623 16:09:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:45.623 16:09:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.623 16:09:46 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:45.623 16:09:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.623 16:09:46 -- common/autotest_common.sh@1194 -- # return 0 00:13:45.623 16:09:46 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:45.623 [global] 00:13:45.623 thread=1 00:13:45.623 invalidate=1 00:13:45.623 rw=write 00:13:45.623 time_based=1 00:13:45.623 runtime=1 00:13:45.623 ioengine=libaio 00:13:45.623 direct=1 00:13:45.623 bs=4096 00:13:45.623 iodepth=1 00:13:45.623 norandommap=0 00:13:45.623 numjobs=1 00:13:45.623 00:13:45.623 verify_dump=1 00:13:45.623 verify_backlog=512 00:13:45.623 verify_state_save=0 00:13:45.623 do_verify=1 00:13:45.623 verify=crc32c-intel 00:13:45.623 [job0] 00:13:45.623 filename=/dev/nvme0n1 00:13:45.623 Could not set queue depth (nvme0n1) 00:13:45.623 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.623 fio-3.35 00:13:45.623 Starting 1 thread 00:13:46.560 00:13:46.560 job0: (groupid=0, jobs=1): err= 0: pid=3387939: Wed Apr 24 16:09:47 2024 00:13:46.560 read: IOPS=511, BW=2047KiB/s (2096kB/s)(2108KiB/1030msec) 00:13:46.560 slat (nsec): min=4684, max=53076, avg=15000.52, stdev=9818.99 00:13:46.560 clat (usec): min=269, max=41120, avg=1495.03, stdev=6764.93 00:13:46.560 lat (usec): min=274, max=41127, avg=1510.03, stdev=6766.56 00:13:46.560 clat percentiles (usec): 00:13:46.560 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:13:46.560 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 338], 00:13:46.560 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 453], 00:13:46.560 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:46.560 | 99.99th=[41157] 00:13:46.560 write: IOPS=994, BW=3977KiB/s (4072kB/s)(4096KiB/1030msec); 0 zone resets 00:13:46.560 slat (nsec): min=5922, max=73518, avg=16665.40, stdev=8545.92 00:13:46.560 clat (usec): min=170, max=335, avg=205.08, stdev=19.51 00:13:46.560 lat (usec): min=177, max=370, avg=221.74, stdev=23.55 00:13:46.560 clat percentiles (usec): 00:13:46.560 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 190], 00:13:46.560 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:13:46.560 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 247], 00:13:46.560 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 285], 99.95th=[ 334], 00:13:46.560 | 99.99th=[ 334] 00:13:46.560 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:13:46.560 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:46.560 lat (usec) : 250=63.19%, 500=35.40%, 750=0.45% 00:13:46.560 lat (msec) : 50=0.97% 00:13:46.560 cpu : usr=0.97%, sys=3.01%, ctx=1551, majf=0, minf=2 00:13:46.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.560 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.560 00:13:46.560 Run status group 0 (all jobs): 00:13:46.560 READ: bw=2047KiB/s (2096kB/s), 2047KiB/s-2047KiB/s (2096kB/s-2096kB/s), io=2108KiB (2159kB), run=1030-1030msec 00:13:46.560 WRITE: bw=3977KiB/s (4072kB/s), 3977KiB/s-3977KiB/s 
(4072kB/s-4072kB/s), io=4096KiB (4194kB), run=1030-1030msec 00:13:46.560 00:13:46.560 Disk stats (read/write): 00:13:46.560 nvme0n1: ios=573/1024, merge=0/0, ticks=649/198, in_queue=847, util=91.98% 00:13:46.560 16:09:47 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:46.818 16:09:47 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:46.818 16:09:47 -- common/autotest_common.sh@1205 -- # local i=0 00:13:46.818 16:09:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:46.818 16:09:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.818 16:09:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:46.818 16:09:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.818 16:09:47 -- common/autotest_common.sh@1217 -- # return 0 00:13:46.818 16:09:47 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:46.818 16:09:47 -- target/nmic.sh@53 -- # nvmftestfini 00:13:46.818 16:09:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:46.818 16:09:47 -- nvmf/common.sh@117 -- # sync 00:13:46.818 16:09:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.818 16:09:47 -- nvmf/common.sh@120 -- # set +e 00:13:46.818 16:09:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.818 16:09:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.818 rmmod nvme_tcp 00:13:46.818 rmmod nvme_fabrics 00:13:46.818 rmmod nvme_keyring 00:13:46.818 16:09:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.818 16:09:47 -- nvmf/common.sh@124 -- # set -e 00:13:46.818 16:09:47 -- nvmf/common.sh@125 -- # return 0 00:13:46.818 16:09:47 -- nvmf/common.sh@478 -- # '[' -n 3387310 ']' 00:13:46.818 16:09:47 -- nvmf/common.sh@479 -- # killprocess 3387310 00:13:46.818 16:09:47 -- common/autotest_common.sh@936 -- # '[' -z 3387310 ']' 00:13:46.818 16:09:47 -- common/autotest_common.sh@940 -- # kill -0 3387310 00:13:46.818 16:09:47 -- common/autotest_common.sh@941 -- # uname 00:13:46.818 16:09:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:46.818 16:09:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3387310 00:13:46.818 16:09:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:46.818 16:09:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:46.818 16:09:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3387310' 00:13:46.818 killing process with pid 3387310 00:13:46.818 16:09:47 -- common/autotest_common.sh@955 -- # kill 3387310 00:13:46.818 16:09:47 -- common/autotest_common.sh@960 -- # wait 3387310 00:13:47.076 16:09:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:47.076 16:09:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:47.076 16:09:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:47.076 16:09:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.076 16:09:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:47.076 16:09:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.076 16:09:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.076 16:09:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.617 16:09:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.617 00:13:49.617 real 0m9.790s 00:13:49.617 user 0m22.037s 00:13:49.617 sys 0m2.310s 00:13:49.617 16:09:50 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.617 16:09:50 -- common/autotest_common.sh@10 -- # set +x 00:13:49.617 ************************************ 00:13:49.617 END TEST nvmf_nmic 00:13:49.617 ************************************ 00:13:49.618 16:09:50 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:49.618 16:09:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:49.618 16:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:49.618 16:09:50 -- common/autotest_common.sh@10 -- # set +x 00:13:49.618 ************************************ 00:13:49.618 START TEST nvmf_fio_target 00:13:49.618 ************************************ 00:13:49.618 16:09:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:49.618 * Looking for test storage... 00:13:49.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.618 16:09:50 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.618 16:09:50 -- nvmf/common.sh@7 -- # uname -s 00:13:49.618 16:09:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.618 16:09:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.618 16:09:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.618 16:09:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.618 16:09:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.618 16:09:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.618 16:09:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.618 16:09:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.618 16:09:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.618 16:09:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.618 16:09:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:49.618 16:09:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:49.618 16:09:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.618 16:09:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.618 16:09:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.618 16:09:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.618 16:09:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.618 16:09:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.618 16:09:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.618 16:09:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.618 16:09:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.618 16:09:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.618 16:09:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.618 16:09:50 -- paths/export.sh@5 -- # export PATH 00:13:49.618 16:09:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.618 16:09:50 -- nvmf/common.sh@47 -- # : 0 00:13:49.618 16:09:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.618 16:09:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.618 16:09:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.618 16:09:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.618 16:09:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.618 16:09:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.618 16:09:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.618 16:09:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.618 16:09:50 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.618 16:09:50 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.618 16:09:50 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.618 16:09:50 -- target/fio.sh@16 -- # nvmftestinit 00:13:49.618 16:09:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:49.618 16:09:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.618 16:09:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:49.618 16:09:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:49.618 16:09:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:49.618 16:09:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.618 16:09:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.618 16:09:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.618 16:09:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:49.618 16:09:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:49.618 16:09:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.618 16:09:50 -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.583 16:09:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:51.583 16:09:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.583 16:09:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.583 16:09:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.583 16:09:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.583 16:09:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.583 16:09:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.583 16:09:52 -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.583 16:09:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.583 16:09:52 -- nvmf/common.sh@296 -- # e810=() 00:13:51.583 16:09:52 -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.583 16:09:52 -- nvmf/common.sh@297 -- # x722=() 00:13:51.583 16:09:52 -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.583 16:09:52 -- nvmf/common.sh@298 -- # mlx=() 00:13:51.583 16:09:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.583 16:09:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.583 16:09:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.583 16:09:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.583 16:09:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.583 16:09:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.583 16:09:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:51.583 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:51.583 16:09:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.583 16:09:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:51.583 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:51.583 16:09:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
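Both E810 ports (0x8086:0x159b) have been detected at this point, and the nvmf_tcp_init block that follows splits them across a network namespace so initiator and target traffic cross real NICs rather than loopback. The plumbing reduces to the following (a sketch; interface and namespace names are taken from this run):

  # Move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The cross-namespace pings that follow (10.0.0.2 from the root namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) confirm the data path before any NVMe/TCP traffic is attempted.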
00:13:51.583 16:09:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.583 16:09:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.583 16:09:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.583 16:09:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:51.583 16:09:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.583 16:09:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:51.583 Found net devices under 0000:09:00.0: cvl_0_0 00:13:51.583 16:09:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.583 16:09:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.583 16:09:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.583 16:09:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:51.583 16:09:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.583 16:09:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:51.583 Found net devices under 0000:09:00.1: cvl_0_1 00:13:51.583 16:09:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.583 16:09:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:51.583 16:09:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:51.583 16:09:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:51.583 16:09:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.583 16:09:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.583 16:09:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.583 16:09:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.583 16:09:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.583 16:09:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.583 16:09:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.583 16:09:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.583 16:09:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.583 16:09:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.583 16:09:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.583 16:09:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.583 16:09:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.583 16:09:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.583 16:09:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.583 16:09:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.583 16:09:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.583 16:09:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.583 16:09:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.583 16:09:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:51.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:13:51.583 00:13:51.583 --- 10.0.0.2 ping statistics --- 00:13:51.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.583 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:13:51.583 16:09:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:13:51.583 00:13:51.583 --- 10.0.0.1 ping statistics --- 00:13:51.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.583 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:51.583 16:09:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.583 16:09:52 -- nvmf/common.sh@411 -- # return 0 00:13:51.583 16:09:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:51.583 16:09:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.583 16:09:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:51.583 16:09:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.583 16:09:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:51.583 16:09:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:51.583 16:09:52 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:51.583 16:09:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:51.583 16:09:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:51.583 16:09:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.583 16:09:52 -- nvmf/common.sh@470 -- # nvmfpid=3390026 00:13:51.583 16:09:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:51.583 16:09:52 -- nvmf/common.sh@471 -- # waitforlisten 3390026 00:13:51.583 16:09:52 -- common/autotest_common.sh@817 -- # '[' -z 3390026 ']' 00:13:51.583 16:09:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.583 16:09:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:51.583 16:09:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.583 16:09:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:51.583 16:09:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.583 [2024-04-24 16:09:52.742804] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:13:51.583 [2024-04-24 16:09:52.742890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.583 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.583 [2024-04-24 16:09:52.809462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.842 [2024-04-24 16:09:52.919656] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.842 [2024-04-24 16:09:52.919713] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
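With connectivity verified, nvmfappstart launches the target application inside the namespace; the four reactor notices that follow (cores 0-3) correspond to the -m 0xF core mask. A minimal equivalent invocation (a sketch; the build path under this workspace is abbreviated):

  # Run nvmf_tgt in the target namespace: shm id 0, all tracepoint groups enabled, 4 cores
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF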
00:13:51.842 [2024-04-24 16:09:52.919727] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.842 [2024-04-24 16:09:52.919738] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.842 [2024-04-24 16:09:52.919756] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.842 [2024-04-24 16:09:52.923769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.842 [2024-04-24 16:09:52.923813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.842 [2024-04-24 16:09:52.923845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.842 [2024-04-24 16:09:52.923849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.842 16:09:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:51.842 16:09:53 -- common/autotest_common.sh@850 -- # return 0 00:13:51.842 16:09:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:51.842 16:09:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:51.842 16:09:53 -- common/autotest_common.sh@10 -- # set +x 00:13:51.842 16:09:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.842 16:09:53 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.100 [2024-04-24 16:09:53.340227] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.100 16:09:53 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.669 16:09:53 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:52.669 16:09:53 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.927 16:09:53 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:52.927 16:09:53 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.197 16:09:54 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:53.197 16:09:54 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.456 16:09:54 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:53.456 16:09:54 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:53.716 16:09:54 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.974 16:09:55 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:53.974 16:09:55 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:54.232 16:09:55 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:54.232 16:09:55 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:54.489 16:09:55 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:54.489 16:09:55 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:54.489 16:09:55 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:54.747 16:09:56 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:54.747 16:09:56 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:55.005 16:09:56 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:55.005 16:09:56 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.262 16:09:56 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.521 [2024-04-24 16:09:56.720593] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.521 16:09:56 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:55.779 16:09:56 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:56.036 16:09:57 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.606 16:09:57 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:56.606 16:09:57 -- common/autotest_common.sh@1184 -- # local i=0 00:13:56.606 16:09:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.606 16:09:57 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:13:56.606 16:09:57 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:13:56.606 16:09:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:59.145 16:09:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:59.145 16:09:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:59.145 16:09:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.145 16:09:59 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:13:59.145 16:09:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.145 16:09:59 -- common/autotest_common.sh@1194 -- # return 0 00:13:59.145 16:09:59 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:59.145 [global] 00:13:59.145 thread=1 00:13:59.145 invalidate=1 00:13:59.145 rw=write 00:13:59.145 time_based=1 00:13:59.145 runtime=1 00:13:59.145 ioengine=libaio 00:13:59.145 direct=1 00:13:59.145 bs=4096 00:13:59.145 iodepth=1 00:13:59.145 norandommap=0 00:13:59.145 numjobs=1 00:13:59.145 00:13:59.145 verify_dump=1 00:13:59.145 verify_backlog=512 00:13:59.145 verify_state_save=0 00:13:59.145 do_verify=1 00:13:59.145 verify=crc32c-intel 00:13:59.145 [job0] 00:13:59.145 filename=/dev/nvme0n1 00:13:59.145 [job1] 00:13:59.145 filename=/dev/nvme0n2 00:13:59.145 [job2] 00:13:59.145 filename=/dev/nvme0n3 00:13:59.145 [job3] 00:13:59.145 filename=/dev/nvme0n4 00:13:59.145 Could not set queue depth (nvme0n1) 00:13:59.145 Could not set queue depth (nvme0n2) 00:13:59.145 Could not set queue depth (nvme0n3) 00:13:59.145 Could not set queue depth (nvme0n4) 00:13:59.145 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:13:59.145 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.145 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.145 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.145 fio-3.35 00:13:59.145 Starting 4 threads 00:14:00.081 00:14:00.081 job0: (groupid=0, jobs=1): err= 0: pid=3391093: Wed Apr 24 16:10:01 2024 00:14:00.081 read: IOPS=1360, BW=5443KiB/s (5573kB/s)(5448KiB/1001msec) 00:14:00.081 slat (nsec): min=7083, max=52459, avg=14578.26, stdev=6391.70 00:14:00.081 clat (usec): min=287, max=1102, avg=376.41, stdev=45.87 00:14:00.081 lat (usec): min=296, max=1119, avg=390.99, stdev=48.00 00:14:00.081 clat percentiles (usec): 00:14:00.081 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 347], 00:14:00.081 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:14:00.081 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 429], 95.00th=[ 449], 00:14:00.081 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 807], 99.95th=[ 1106], 00:14:00.081 | 99.99th=[ 1106] 00:14:00.081 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:00.081 slat (usec): min=7, max=37459, avg=43.23, stdev=955.35 00:14:00.081 clat (usec): min=195, max=1622, avg=252.33, stdev=45.48 00:14:00.081 lat (usec): min=206, max=37790, avg=295.56, stdev=958.47 00:14:00.081 clat percentiles (usec): 00:14:00.081 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 233], 00:14:00.081 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:14:00.081 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:14:00.081 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 709], 99.95th=[ 1631], 00:14:00.081 | 99.99th=[ 1631] 00:14:00.081 bw ( KiB/s): min= 7768, max= 7768, per=34.52%, avg=7768.00, stdev= 0.00, samples=1 00:14:00.081 iops : min= 1942, max= 1942, avg=1942.00, stdev= 0.00, samples=1 00:14:00.081 lat (usec) : 250=29.50%, 500=69.81%, 750=0.59%, 1000=0.03% 00:14:00.081 lat (msec) : 2=0.07% 00:14:00.081 cpu : usr=3.50%, sys=6.60%, ctx=2900, majf=0, minf=1 00:14:00.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.081 issued rwts: total=1362,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.081 job1: (groupid=0, jobs=1): err= 0: pid=3391094: Wed Apr 24 16:10:01 2024 00:14:00.081 read: IOPS=1463, BW=5854KiB/s (5995kB/s)(5860KiB/1001msec) 00:14:00.081 slat (nsec): min=5834, max=70316, avg=23989.48, stdev=12354.37 00:14:00.081 clat (usec): min=326, max=556, avg=412.06, stdev=37.07 00:14:00.081 lat (usec): min=333, max=576, avg=436.05, stdev=43.71 00:14:00.081 clat percentiles (usec): 00:14:00.081 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 379], 00:14:00.081 | 30.00th=[ 392], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 420], 00:14:00.081 | 70.00th=[ 429], 80.00th=[ 441], 90.00th=[ 461], 95.00th=[ 478], 00:14:00.081 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 537], 99.95th=[ 553], 00:14:00.081 | 99.99th=[ 553] 00:14:00.081 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:00.081 slat (nsec): min=5997, max=66349, avg=11912.98, stdev=5515.61 
00:14:00.081 clat (usec): min=177, max=1032, avg=212.91, stdev=42.76 00:14:00.081 lat (usec): min=186, max=1049, avg=224.82, stdev=43.52 00:14:00.081 clat percentiles (usec): 00:14:00.081 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:14:00.081 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:14:00.081 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 237], 00:14:00.081 | 99.00th=[ 293], 99.50th=[ 461], 99.90th=[ 889], 99.95th=[ 1037], 00:14:00.081 | 99.99th=[ 1037] 00:14:00.081 bw ( KiB/s): min= 8192, max= 8192, per=36.40%, avg=8192.00, stdev= 0.00, samples=1 00:14:00.081 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:00.081 lat (usec) : 250=49.82%, 500=49.45%, 750=0.60%, 1000=0.10% 00:14:00.081 lat (msec) : 2=0.03% 00:14:00.081 cpu : usr=2.80%, sys=5.70%, ctx=3002, majf=0, minf=2 00:14:00.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.081 issued rwts: total=1465,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.081 job2: (groupid=0, jobs=1): err= 0: pid=3391095: Wed Apr 24 16:10:01 2024 00:14:00.081 read: IOPS=1361, BW=5447KiB/s (5577kB/s)(5452KiB/1001msec) 00:14:00.081 slat (nsec): min=5732, max=74966, avg=20591.07, stdev=11085.90 00:14:00.081 clat (usec): min=291, max=41067, avg=408.88, stdev=1103.43 00:14:00.081 lat (usec): min=297, max=41083, avg=429.47, stdev=1103.62 00:14:00.081 clat percentiles (usec): 00:14:00.081 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 334], 00:14:00.081 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 383], 00:14:00.081 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 441], 95.00th=[ 519], 00:14:00.081 | 99.00th=[ 545], 99.50th=[ 545], 99.90th=[ 619], 99.95th=[41157], 00:14:00.081 | 99.99th=[41157] 00:14:00.081 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:00.081 slat (nsec): min=6293, max=60141, avg=15227.66, stdev=6881.95 00:14:00.081 clat (usec): min=196, max=572, avg=244.92, stdev=24.64 00:14:00.081 lat (usec): min=205, max=588, avg=260.14, stdev=26.02 00:14:00.081 clat percentiles (usec): 00:14:00.081 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 229], 00:14:00.081 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:14:00.081 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 285], 00:14:00.081 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 412], 99.95th=[ 570], 00:14:00.081 | 99.99th=[ 570] 00:14:00.081 bw ( KiB/s): min= 7912, max= 7912, per=35.16%, avg=7912.00, stdev= 0.00, samples=1 00:14:00.081 iops : min= 1978, max= 1978, avg=1978.00, stdev= 0.00, samples=1 00:14:00.081 lat (usec) : 250=37.56%, 500=59.61%, 750=2.79% 00:14:00.081 lat (msec) : 50=0.03% 00:14:00.081 cpu : usr=2.60%, sys=5.60%, ctx=2900, majf=0, minf=1 00:14:00.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.081 issued rwts: total=1363,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.081 job3: (groupid=0, jobs=1): err= 0: pid=3391096: Wed Apr 24 16:10:01 2024 00:14:00.081 
read: IOPS=1019, BW=4080KiB/s (4178kB/s)(4084KiB/1001msec) 00:14:00.081 slat (nsec): min=6475, max=74729, avg=25941.36, stdev=12151.97 00:14:00.081 clat (usec): min=293, max=42099, avg=662.77, stdev=3126.51 00:14:00.081 lat (usec): min=308, max=42111, avg=688.71, stdev=3125.79 00:14:00.081 clat percentiles (usec): 00:14:00.081 | 1.00th=[ 306], 5.00th=[ 330], 10.00th=[ 347], 20.00th=[ 371], 00:14:00.081 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 412], 60.00th=[ 433], 00:14:00.081 | 70.00th=[ 453], 80.00th=[ 474], 90.00th=[ 510], 95.00th=[ 537], 00:14:00.081 | 99.00th=[ 635], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:14:00.081 | 99.99th=[42206] 00:14:00.081 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:00.081 slat (nsec): min=6384, max=69022, avg=16452.34, stdev=9093.21 00:14:00.081 clat (usec): min=190, max=676, avg=260.88, stdev=45.92 00:14:00.081 lat (usec): min=201, max=686, avg=277.33, stdev=47.58 00:14:00.081 clat percentiles (usec): 00:14:00.081 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 225], 00:14:00.081 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 260], 00:14:00.081 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 351], 00:14:00.081 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 474], 99.95th=[ 676], 00:14:00.081 | 99.99th=[ 676] 00:14:00.081 bw ( KiB/s): min= 4096, max= 4096, per=18.20%, avg=4096.00, stdev= 0.00, samples=1 00:14:00.081 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:00.081 lat (usec) : 250=25.92%, 500=67.97%, 750=5.77% 00:14:00.081 lat (msec) : 2=0.05%, 50=0.29% 00:14:00.081 cpu : usr=1.80%, sys=5.10%, ctx=2046, majf=0, minf=1 00:14:00.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.081 issued rwts: total=1021,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.081 00:14:00.081 Run status group 0 (all jobs): 00:14:00.081 READ: bw=20.3MiB/s (21.3MB/s), 4080KiB/s-5854KiB/s (4178kB/s-5995kB/s), io=20.4MiB (21.3MB), run=1001-1001msec 00:14:00.081 WRITE: bw=22.0MiB/s (23.0MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=22.0MiB (23.1MB), run=1001-1001msec 00:14:00.081 00:14:00.081 Disk stats (read/write): 00:14:00.081 nvme0n1: ios=1073/1502, merge=0/0, ticks=520/351, in_queue=871, util=85.47% 00:14:00.081 nvme0n2: ios=1181/1536, merge=0/0, ticks=789/319, in_queue=1108, util=89.32% 00:14:00.081 nvme0n3: ios=1081/1525, merge=0/0, ticks=474/353, in_queue=827, util=95.40% 00:14:00.081 nvme0n4: ios=767/1024, merge=0/0, ticks=1106/258, in_queue=1364, util=95.79% 00:14:00.081 16:10:01 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:00.081 [global] 00:14:00.081 thread=1 00:14:00.081 invalidate=1 00:14:00.081 rw=randwrite 00:14:00.081 time_based=1 00:14:00.081 runtime=1 00:14:00.081 ioengine=libaio 00:14:00.081 direct=1 00:14:00.081 bs=4096 00:14:00.081 iodepth=1 00:14:00.081 norandommap=0 00:14:00.081 numjobs=1 00:14:00.081 00:14:00.081 verify_dump=1 00:14:00.081 verify_backlog=512 00:14:00.081 verify_state_save=0 00:14:00.081 do_verify=1 00:14:00.081 verify=crc32c-intel 00:14:00.081 [job0] 00:14:00.081 filename=/dev/nvme0n1 00:14:00.081 [job1] 00:14:00.081 filename=/dev/nvme0n2 00:14:00.082 [job2] 
00:14:00.082 filename=/dev/nvme0n3 00:14:00.082 [job3] 00:14:00.082 filename=/dev/nvme0n4 00:14:00.082 Could not set queue depth (nvme0n1) 00:14:00.082 Could not set queue depth (nvme0n2) 00:14:00.082 Could not set queue depth (nvme0n3) 00:14:00.082 Could not set queue depth (nvme0n4) 00:14:00.340 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.341 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.341 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.341 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.341 fio-3.35 00:14:00.341 Starting 4 threads 00:14:01.719 00:14:01.719 job0: (groupid=0, jobs=1): err= 0: pid=3391329: Wed Apr 24 16:10:02 2024 00:14:01.719 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:01.719 slat (nsec): min=6125, max=83094, avg=22002.38, stdev=10696.40 00:14:01.719 clat (usec): min=254, max=1105, avg=332.27, stdev=50.15 00:14:01.719 lat (usec): min=261, max=1136, avg=354.28, stdev=51.73 00:14:01.719 clat percentiles (usec): 00:14:01.719 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 297], 00:14:01.719 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 343], 00:14:01.719 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 404], 00:14:01.719 | 99.00th=[ 453], 99.50th=[ 490], 99.90th=[ 963], 99.95th=[ 1106], 00:14:01.719 | 99.99th=[ 1106] 00:14:01.719 write: IOPS=1647, BW=6589KiB/s (6748kB/s)(6596KiB/1001msec); 0 zone resets 00:14:01.719 slat (nsec): min=7265, max=71954, avg=17636.40, stdev=9369.91 00:14:01.719 clat (usec): min=174, max=581, avg=247.97, stdev=72.44 00:14:01.719 lat (usec): min=183, max=620, avg=265.60, stdev=75.86 00:14:01.719 clat percentiles (usec): 00:14:01.719 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:14:01.719 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 227], 00:14:01.719 | 70.00th=[ 262], 80.00th=[ 306], 90.00th=[ 371], 95.00th=[ 404], 00:14:01.719 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 537], 99.95th=[ 578], 00:14:01.719 | 99.99th=[ 578] 00:14:01.719 bw ( KiB/s): min= 8192, max= 8192, per=66.74%, avg=8192.00, stdev= 0.00, samples=1 00:14:01.719 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:01.719 lat (usec) : 250=35.10%, 500=64.49%, 750=0.31%, 1000=0.06% 00:14:01.719 lat (msec) : 2=0.03% 00:14:01.719 cpu : usr=3.00%, sys=7.00%, ctx=3186, majf=0, minf=2 00:14:01.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:01.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.719 issued rwts: total=1536,1649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:01.719 job1: (groupid=0, jobs=1): err= 0: pid=3391330: Wed Apr 24 16:10:02 2024 00:14:01.719 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:14:01.719 slat (nsec): min=9527, max=37528, avg=27232.68, stdev=9038.57 00:14:01.719 clat (usec): min=385, max=41457, avg=39161.67, stdev=8661.98 00:14:01.719 lat (usec): min=420, max=41477, avg=39188.91, stdev=8660.19 00:14:01.719 clat percentiles (usec): 00:14:01.719 | 1.00th=[ 388], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:01.719 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:01.719 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:01.719 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:01.719 | 99.99th=[41681] 00:14:01.719 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:14:01.719 slat (nsec): min=8096, max=63186, avg=19874.10, stdev=9946.97 00:14:01.719 clat (usec): min=180, max=665, avg=316.74, stdev=80.85 00:14:01.719 lat (usec): min=190, max=692, avg=336.61, stdev=83.81 00:14:01.719 clat percentiles (usec): 00:14:01.719 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 217], 20.00th=[ 258], 00:14:01.719 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 322], 00:14:01.719 | 70.00th=[ 347], 80.00th=[ 379], 90.00th=[ 429], 95.00th=[ 465], 00:14:01.719 | 99.00th=[ 562], 99.50th=[ 627], 99.90th=[ 668], 99.95th=[ 668], 00:14:01.719 | 99.99th=[ 668] 00:14:01.719 bw ( KiB/s): min= 4096, max= 4096, per=33.37%, avg=4096.00, stdev= 0.00, samples=1 00:14:01.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:01.719 lat (usec) : 250=16.48%, 500=77.53%, 750=2.06% 00:14:01.719 lat (msec) : 50=3.93% 00:14:01.719 cpu : usr=1.25%, sys=0.77%, ctx=535, majf=0, minf=1 00:14:01.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:01.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.719 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:01.719 job2: (groupid=0, jobs=1): err= 0: pid=3391331: Wed Apr 24 16:10:02 2024 00:14:01.719 read: IOPS=394, BW=1578KiB/s (1616kB/s)(1608KiB/1019msec) 00:14:01.719 slat (nsec): min=6141, max=36480, avg=7987.84, stdev=2345.94 00:14:01.719 clat (usec): min=313, max=42015, avg=2179.35, stdev=8264.22 00:14:01.719 lat (usec): min=320, max=42029, avg=2187.34, stdev=8265.45 00:14:01.719 clat percentiles (usec): 00:14:01.719 | 1.00th=[ 334], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 375], 00:14:01.719 | 30.00th=[ 400], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:14:01.719 | 70.00th=[ 490], 80.00th=[ 498], 90.00th=[ 506], 95.00th=[ 537], 00:14:01.719 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:01.719 | 99.99th=[42206] 00:14:01.719 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:14:01.719 slat (usec): min=8, max=111, avg=11.35, stdev= 6.23 00:14:01.719 clat (usec): min=174, max=372, avg=254.64, stdev=40.76 00:14:01.719 lat (usec): min=214, max=403, avg=265.98, stdev=42.49 00:14:01.719 clat percentiles (usec): 00:14:01.720 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 223], 00:14:01.720 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 245], 00:14:01.720 | 70.00th=[ 262], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 338], 00:14:01.720 | 99.00th=[ 363], 99.50th=[ 367], 99.90th=[ 371], 99.95th=[ 371], 00:14:01.720 | 99.99th=[ 371] 00:14:01.720 bw ( KiB/s): min= 4096, max= 4096, per=33.37%, avg=4096.00, stdev= 0.00, samples=1 00:14:01.720 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:01.720 lat (usec) : 250=35.89%, 500=57.00%, 750=5.25% 00:14:01.720 lat (msec) : 50=1.86% 00:14:01.720 cpu : usr=0.69%, sys=1.08%, ctx=917, majf=0, minf=1 00:14:01.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:01.720 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.720 issued rwts: total=402,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:01.720 job3: (groupid=0, jobs=1): err= 0: pid=3391332: Wed Apr 24 16:10:02 2024 00:14:01.720 read: IOPS=176, BW=705KiB/s (722kB/s)(712KiB/1010msec) 00:14:01.720 slat (nsec): min=7850, max=34017, avg=16127.73, stdev=6388.90 00:14:01.720 clat (usec): min=340, max=42020, avg=4794.59, stdev=12596.97 00:14:01.720 lat (usec): min=350, max=42035, avg=4810.72, stdev=12600.42 00:14:01.720 clat percentiles (usec): 00:14:01.720 | 1.00th=[ 363], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 388], 00:14:01.720 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 474], 00:14:01.720 | 70.00th=[ 523], 80.00th=[ 586], 90.00th=[40633], 95.00th=[41157], 00:14:01.720 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:01.720 | 99.99th=[42206] 00:14:01.720 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:14:01.720 slat (nsec): min=6612, max=64061, avg=16940.73, stdev=8510.77 00:14:01.720 clat (usec): min=184, max=423, avg=275.72, stdev=51.85 00:14:01.720 lat (usec): min=199, max=441, avg=292.66, stdev=51.97 00:14:01.720 clat percentiles (usec): 00:14:01.720 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 239], 00:14:01.720 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 273], 00:14:01.720 | 70.00th=[ 285], 80.00th=[ 322], 90.00th=[ 363], 95.00th=[ 383], 00:14:01.720 | 99.00th=[ 408], 99.50th=[ 412], 99.90th=[ 424], 99.95th=[ 424], 00:14:01.720 | 99.99th=[ 424] 00:14:01.720 bw ( KiB/s): min= 4096, max= 4096, per=33.37%, avg=4096.00, stdev= 0.00, samples=1 00:14:01.720 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:01.720 lat (usec) : 250=24.35%, 500=66.81%, 750=5.94%, 1000=0.14% 00:14:01.720 lat (msec) : 50=2.75% 00:14:01.720 cpu : usr=0.50%, sys=1.19%, ctx=691, majf=0, minf=1 00:14:01.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:01.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.720 issued rwts: total=178,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:01.720 00:14:01.720 Run status group 0 (all jobs): 00:14:01.720 READ: bw=8239KiB/s (8437kB/s), 84.8KiB/s-6138KiB/s (86.8kB/s-6285kB/s), io=8552KiB (8757kB), run=1001-1038msec 00:14:01.720 WRITE: bw=12.0MiB/s (12.6MB/s), 1973KiB/s-6589KiB/s (2020kB/s-6748kB/s), io=12.4MiB (13.0MB), run=1001-1038msec 00:14:01.720 00:14:01.720 Disk stats (read/write): 00:14:01.720 nvme0n1: ios=1250/1536, merge=0/0, ticks=1384/371, in_queue=1755, util=100.00% 00:14:01.720 nvme0n2: ios=40/512, merge=0/0, ticks=1530/156, in_queue=1686, util=91.06% 00:14:01.720 nvme0n3: ios=455/512, merge=0/0, ticks=1006/124, in_queue=1130, util=93.74% 00:14:01.720 nvme0n4: ios=197/512, merge=0/0, ticks=1603/133, in_queue=1736, util=98.11% 00:14:01.720 16:10:02 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:01.720 [global] 00:14:01.720 thread=1 00:14:01.720 invalidate=1 00:14:01.720 rw=write 00:14:01.720 time_based=1 00:14:01.720 runtime=1 00:14:01.720 ioengine=libaio 00:14:01.720 direct=1 
00:14:01.720 bs=4096 00:14:01.720 iodepth=128 00:14:01.720 norandommap=0 00:14:01.720 numjobs=1 00:14:01.720 00:14:01.720 verify_dump=1 00:14:01.720 verify_backlog=512 00:14:01.720 verify_state_save=0 00:14:01.720 do_verify=1 00:14:01.720 verify=crc32c-intel 00:14:01.720 [job0] 00:14:01.720 filename=/dev/nvme0n1 00:14:01.720 [job1] 00:14:01.720 filename=/dev/nvme0n2 00:14:01.720 [job2] 00:14:01.720 filename=/dev/nvme0n3 00:14:01.720 [job3] 00:14:01.720 filename=/dev/nvme0n4 00:14:01.720 Could not set queue depth (nvme0n1) 00:14:01.720 Could not set queue depth (nvme0n2) 00:14:01.720 Could not set queue depth (nvme0n3) 00:14:01.720 Could not set queue depth (nvme0n4) 00:14:01.720 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:01.720 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:01.720 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:01.720 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:01.720 fio-3.35 00:14:01.720 Starting 4 threads 00:14:03.097 00:14:03.097 job0: (groupid=0, jobs=1): err= 0: pid=3391562: Wed Apr 24 16:10:04 2024 00:14:03.097 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:14:03.097 slat (usec): min=3, max=16554, avg=92.07, stdev=517.21 00:14:03.097 clat (usec): min=3241, max=48919, avg=12872.12, stdev=6482.02 00:14:03.097 lat (usec): min=3247, max=51592, avg=12964.19, stdev=6507.74 00:14:03.097 clat percentiles (usec): 00:14:03.097 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10683], 00:14:03.097 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:14:03.097 | 70.00th=[11863], 80.00th=[12387], 90.00th=[15008], 95.00th=[20579], 00:14:03.097 | 99.00th=[45876], 99.50th=[46924], 99.90th=[47973], 99.95th=[48497], 00:14:03.097 | 99.99th=[49021] 00:14:03.097 write: IOPS=5125, BW=20.0MiB/s (21.0MB/s)(20.1MiB/1002msec); 0 zone resets 00:14:03.097 slat (usec): min=4, max=14651, avg=89.44, stdev=507.99 00:14:03.097 clat (usec): min=954, max=44319, avg=11605.08, stdev=2940.34 00:14:03.097 lat (usec): min=960, max=44338, avg=11694.52, stdev=2971.61 00:14:03.097 clat percentiles (usec): 00:14:03.097 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10814], 00:14:03.097 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:14:03.097 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12387], 95.00th=[14091], 00:14:03.097 | 99.00th=[26608], 99.50th=[38536], 99.90th=[40109], 99.95th=[44303], 00:14:03.097 | 99.99th=[44303] 00:14:03.097 bw ( KiB/s): min=19472, max=19472, per=31.96%, avg=19472.00, stdev= 0.00, samples=1 00:14:03.097 iops : min= 4868, max= 4868, avg=4868.00, stdev= 0.00, samples=1 00:14:03.097 lat (usec) : 1000=0.04% 00:14:03.097 lat (msec) : 4=0.27%, 10=9.78%, 20=86.39%, 50=3.52% 00:14:03.097 cpu : usr=7.49%, sys=13.59%, ctx=425, majf=0, minf=1 00:14:03.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:03.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.097 issued rwts: total=5120,5136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.097 job1: (groupid=0, jobs=1): err= 0: pid=3391563: Wed Apr 24 16:10:04 2024 00:14:03.097 read: 
IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:14:03.097 slat (usec): min=3, max=32247, avg=158.97, stdev=1143.48 00:14:03.097 clat (usec): min=8476, max=63975, avg=20036.58, stdev=8916.93 00:14:03.097 lat (usec): min=8490, max=64015, avg=20195.55, stdev=9004.52 00:14:03.097 clat percentiles (usec): 00:14:03.097 | 1.00th=[10028], 5.00th=[11600], 10.00th=[12911], 20.00th=[14222], 00:14:03.097 | 30.00th=[15795], 40.00th=[16712], 50.00th=[17433], 60.00th=[18744], 00:14:03.097 | 70.00th=[20055], 80.00th=[22938], 90.00th=[29230], 95.00th=[43779], 00:14:03.097 | 99.00th=[55837], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:14:03.097 | 99.99th=[64226] 00:14:03.097 write: IOPS=2978, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1007msec); 0 zone resets 00:14:03.097 slat (usec): min=4, max=6629, avg=185.40, stdev=694.80 00:14:03.097 clat (usec): min=5575, max=62315, avg=25406.83, stdev=9563.41 00:14:03.097 lat (usec): min=7187, max=62325, avg=25592.24, stdev=9609.73 00:14:03.097 clat percentiles (usec): 00:14:03.097 | 1.00th=[ 8979], 5.00th=[11731], 10.00th=[12256], 20.00th=[19792], 00:14:03.097 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23987], 60.00th=[24511], 00:14:03.097 | 70.00th=[27395], 80.00th=[30802], 90.00th=[35914], 95.00th=[44827], 00:14:03.097 | 99.00th=[56886], 99.50th=[57410], 99.90th=[62129], 99.95th=[62129], 00:14:03.097 | 99.99th=[62129] 00:14:03.097 bw ( KiB/s): min=11192, max=11784, per=18.86%, avg=11488.00, stdev=418.61, samples=2 00:14:03.097 iops : min= 2798, max= 2946, avg=2872.00, stdev=104.65, samples=2 00:14:03.097 lat (msec) : 10=1.83%, 20=41.37%, 50=53.82%, 100=2.97% 00:14:03.097 cpu : usr=3.88%, sys=8.45%, ctx=399, majf=0, minf=1 00:14:03.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:03.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.097 issued rwts: total=2560,2999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.097 job2: (groupid=0, jobs=1): err= 0: pid=3391565: Wed Apr 24 16:10:04 2024 00:14:03.097 read: IOPS=4327, BW=16.9MiB/s (17.7MB/s)(17.7MiB/1045msec) 00:14:03.097 slat (usec): min=3, max=26972, avg=112.34, stdev=725.94 00:14:03.097 clat (usec): min=6715, max=66138, avg=16037.51, stdev=9898.57 00:14:03.097 lat (usec): min=9308, max=66155, avg=16149.85, stdev=9924.83 00:14:03.097 clat percentiles (usec): 00:14:03.097 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[11731], 20.00th=[12780], 00:14:03.097 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:14:03.097 | 70.00th=[13698], 80.00th=[14091], 90.00th=[15401], 95.00th=[44303], 00:14:03.097 | 99.00th=[55313], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:14:03.097 | 99.99th=[66323] 00:14:03.097 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:14:03.097 slat (usec): min=4, max=4513, avg=94.26, stdev=400.02 00:14:03.097 clat (usec): min=9452, max=16111, avg=12813.61, stdev=1244.36 00:14:03.097 lat (usec): min=9794, max=16120, avg=12907.87, stdev=1232.59 00:14:03.097 clat percentiles (usec): 00:14:03.097 | 1.00th=[10290], 5.00th=[10945], 10.00th=[11338], 20.00th=[11600], 00:14:03.097 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:14:03.097 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[15008], 00:14:03.097 | 99.00th=[15533], 99.50th=[15795], 99.90th=[16057], 99.95th=[16057], 00:14:03.097 | 
99.99th=[16057] 00:14:03.097 bw ( KiB/s): min=16384, max=20480, per=30.25%, avg=18432.00, stdev=2896.31, samples=2 00:14:03.097 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:14:03.097 lat (msec) : 10=0.74%, 20=95.07%, 50=2.14%, 100=2.05% 00:14:03.097 cpu : usr=7.38%, sys=10.25%, ctx=521, majf=0, minf=1 00:14:03.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:03.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.097 issued rwts: total=4522,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.097 job3: (groupid=0, jobs=1): err= 0: pid=3391566: Wed Apr 24 16:10:04 2024 00:14:03.097 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:14:03.097 slat (usec): min=2, max=12845, avg=170.97, stdev=1003.41 00:14:03.097 clat (usec): min=4623, max=63387, avg=23380.56, stdev=9574.63 00:14:03.097 lat (usec): min=4632, max=63391, avg=23551.53, stdev=9614.11 00:14:03.097 clat percentiles (usec): 00:14:03.097 | 1.00th=[ 5145], 5.00th=[13173], 10.00th=[13435], 20.00th=[15926], 00:14:03.097 | 30.00th=[16581], 40.00th=[18744], 50.00th=[22414], 60.00th=[24249], 00:14:03.097 | 70.00th=[27395], 80.00th=[31065], 90.00th=[36439], 95.00th=[40633], 00:14:03.097 | 99.00th=[61080], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:14:03.097 | 99.99th=[63177] 00:14:03.097 write: IOPS=3158, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1005msec); 0 zone resets 00:14:03.097 slat (usec): min=3, max=11644, avg=122.05, stdev=661.72 00:14:03.097 clat (usec): min=874, max=50304, avg=17582.88, stdev=6557.70 00:14:03.097 lat (usec): min=894, max=50323, avg=17704.94, stdev=6583.02 00:14:03.097 clat percentiles (usec): 00:14:03.097 | 1.00th=[ 2008], 5.00th=[ 8455], 10.00th=[10552], 20.00th=[12780], 00:14:03.097 | 30.00th=[13698], 40.00th=[15795], 50.00th=[17695], 60.00th=[17957], 00:14:03.097 | 70.00th=[20579], 80.00th=[22938], 90.00th=[23462], 95.00th=[27132], 00:14:03.097 | 99.00th=[41681], 99.50th=[44303], 99.90th=[45876], 99.95th=[49021], 00:14:03.097 | 99.99th=[50070] 00:14:03.097 bw ( KiB/s): min= 8192, max=16384, per=20.17%, avg=12288.00, stdev=5792.62, samples=2 00:14:03.097 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:14:03.097 lat (usec) : 1000=0.05% 00:14:03.097 lat (msec) : 2=0.29%, 4=0.88%, 10=4.03%, 20=51.59%, 50=42.62% 00:14:03.097 lat (msec) : 100=0.54% 00:14:03.097 cpu : usr=3.49%, sys=7.97%, ctx=296, majf=0, minf=1 00:14:03.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:03.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.097 issued rwts: total=3072,3174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.097 00:14:03.097 Run status group 0 (all jobs): 00:14:03.097 READ: bw=57.1MiB/s (59.9MB/s), 9.93MiB/s-20.0MiB/s (10.4MB/s-20.9MB/s), io=59.7MiB (62.6MB), run=1002-1045msec 00:14:03.097 WRITE: bw=59.5MiB/s (62.4MB/s), 11.6MiB/s-20.0MiB/s (12.2MB/s-21.0MB/s), io=62.2MiB (65.2MB), run=1002-1045msec 00:14:03.097 00:14:03.097 Disk stats (read/write): 00:14:03.097 nvme0n1: ios=4118/4424, merge=0/0, ticks=17694/15746, in_queue=33440, util=84.97% 00:14:03.097 nvme0n2: ios=2098/2535, merge=0/0, ticks=19933/30074, in_queue=50007, 
util=91.06% 00:14:03.097 nvme0n3: ios=3609/4060, merge=0/0, ticks=14364/11787, in_queue=26151, util=93.00% 00:14:03.097 nvme0n4: ios=2617/3023, merge=0/0, ticks=24573/23006, in_queue=47579, util=95.79% 00:14:03.097 16:10:04 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:03.097 [global] 00:14:03.097 thread=1 00:14:03.097 invalidate=1 00:14:03.097 rw=randwrite 00:14:03.097 time_based=1 00:14:03.097 runtime=1 00:14:03.097 ioengine=libaio 00:14:03.097 direct=1 00:14:03.097 bs=4096 00:14:03.097 iodepth=128 00:14:03.097 norandommap=0 00:14:03.097 numjobs=1 00:14:03.097 00:14:03.097 verify_dump=1 00:14:03.097 verify_backlog=512 00:14:03.097 verify_state_save=0 00:14:03.097 do_verify=1 00:14:03.097 verify=crc32c-intel 00:14:03.097 [job0] 00:14:03.097 filename=/dev/nvme0n1 00:14:03.097 [job1] 00:14:03.097 filename=/dev/nvme0n2 00:14:03.097 [job2] 00:14:03.097 filename=/dev/nvme0n3 00:14:03.097 [job3] 00:14:03.097 filename=/dev/nvme0n4 00:14:03.097 Could not set queue depth (nvme0n1) 00:14:03.097 Could not set queue depth (nvme0n2) 00:14:03.097 Could not set queue depth (nvme0n3) 00:14:03.097 Could not set queue depth (nvme0n4) 00:14:03.355 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:03.355 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:03.355 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:03.355 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:03.355 fio-3.35 00:14:03.355 Starting 4 threads 00:14:04.733 00:14:04.733 job0: (groupid=0, jobs=1): err= 0: pid=3391908: Wed Apr 24 16:10:05 2024 00:14:04.733 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:14:04.733 slat (usec): min=2, max=20373, avg=128.66, stdev=898.18 00:14:04.733 clat (usec): min=4754, max=42733, avg=15935.97, stdev=6535.85 00:14:04.733 lat (usec): min=4762, max=44464, avg=16064.63, stdev=6579.74 00:14:04.733 clat percentiles (usec): 00:14:04.733 | 1.00th=[ 5997], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11600], 00:14:04.733 | 30.00th=[12387], 40.00th=[13173], 50.00th=[14091], 60.00th=[15008], 00:14:04.733 | 70.00th=[16057], 80.00th=[18220], 90.00th=[26870], 95.00th=[31327], 00:14:04.733 | 99.00th=[34866], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:14:04.733 | 99.99th=[42730] 00:14:04.733 write: IOPS=3954, BW=15.4MiB/s (16.2MB/s)(15.6MiB/1009msec); 0 zone resets 00:14:04.733 slat (usec): min=3, max=14748, avg=120.10, stdev=798.65 00:14:04.733 clat (usec): min=3422, max=68645, avg=17677.36, stdev=10621.24 00:14:04.733 lat (usec): min=3434, max=68653, avg=17797.47, stdev=10679.25 00:14:04.733 clat percentiles (usec): 00:14:04.733 | 1.00th=[ 5407], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[11076], 00:14:04.733 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13698], 60.00th=[15533], 00:14:04.733 | 70.00th=[18482], 80.00th=[22676], 90.00th=[33424], 95.00th=[42730], 00:14:04.733 | 99.00th=[51643], 99.50th=[62129], 99.90th=[68682], 99.95th=[68682], 00:14:04.733 | 99.99th=[68682] 00:14:04.733 bw ( KiB/s): min=14520, max=16384, per=24.80%, avg=15452.00, stdev=1318.05, samples=2 00:14:04.733 iops : min= 3630, max= 4096, avg=3863.00, stdev=329.51, samples=2 00:14:04.733 lat (msec) : 4=0.09%, 10=12.01%, 20=66.89%, 50=20.31%, 100=0.70% 00:14:04.733 cpu : 
usr=4.07%, sys=5.65%, ctx=330, majf=0, minf=1 00:14:04.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:04.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:04.733 issued rwts: total=3584,3990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:04.733 job1: (groupid=0, jobs=1): err= 0: pid=3391909: Wed Apr 24 16:10:05 2024 00:14:04.733 read: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1004msec) 00:14:04.733 slat (usec): min=2, max=11298, avg=103.63, stdev=561.36 00:14:04.733 clat (usec): min=2595, max=54658, avg=13456.44, stdev=4659.79 00:14:04.733 lat (usec): min=6728, max=54666, avg=13560.07, stdev=4651.84 00:14:04.733 clat percentiles (usec): 00:14:04.733 | 1.00th=[ 7439], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10945], 00:14:04.733 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12911], 00:14:04.733 | 70.00th=[13566], 80.00th=[14222], 90.00th=[16712], 95.00th=[23987], 00:14:04.733 | 99.00th=[28705], 99.50th=[31065], 99.90th=[54789], 99.95th=[54789], 00:14:04.733 | 99.99th=[54789] 00:14:04.733 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:14:04.733 slat (usec): min=4, max=11124, avg=114.52, stdev=635.47 00:14:04.733 clat (msec): min=6, max=117, avg=15.33, stdev=14.75 00:14:04.733 lat (msec): min=6, max=117, avg=15.45, stdev=14.84 00:14:04.733 clat percentiles (msec): 00:14:04.733 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:14:04.733 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:14:04.733 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 19], 95.00th=[ 29], 00:14:04.733 | 99.00th=[ 102], 99.50th=[ 108], 99.90th=[ 118], 99.95th=[ 118], 00:14:04.733 | 99.99th=[ 118] 00:14:04.733 bw ( KiB/s): min=16040, max=20521, per=29.34%, avg=18280.50, stdev=3168.55, samples=2 00:14:04.733 iops : min= 4010, max= 5130, avg=4570.00, stdev=791.96, samples=2 00:14:04.733 lat (msec) : 4=0.01%, 10=9.78%, 20=83.05%, 50=5.18%, 100=1.37% 00:14:04.733 lat (msec) : 250=0.61% 00:14:04.733 cpu : usr=4.68%, sys=8.07%, ctx=516, majf=0, minf=1 00:14:04.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:04.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:04.733 issued rwts: total=4181,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:04.733 job2: (groupid=0, jobs=1): err= 0: pid=3391910: Wed Apr 24 16:10:05 2024 00:14:04.733 read: IOPS=3704, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1002msec) 00:14:04.733 slat (usec): min=2, max=14643, avg=123.06, stdev=791.25 00:14:04.733 clat (usec): min=681, max=57540, avg=15951.06, stdev=6244.12 00:14:04.733 lat (usec): min=3440, max=63119, avg=16074.13, stdev=6283.15 00:14:04.733 clat percentiles (usec): 00:14:04.733 | 1.00th=[ 5669], 5.00th=[10028], 10.00th=[11207], 20.00th=[12387], 00:14:04.733 | 30.00th=[13042], 40.00th=[13304], 50.00th=[14222], 60.00th=[14746], 00:14:04.733 | 70.00th=[16319], 80.00th=[19792], 90.00th=[22676], 95.00th=[26346], 00:14:04.733 | 99.00th=[36963], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:14:04.733 | 99.99th=[57410] 00:14:04.733 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:14:04.733 slat (usec): min=3, max=13869, 
avg=122.78, stdev=841.65 00:14:04.733 clat (usec): min=2773, max=51177, avg=16569.03, stdev=7105.63 00:14:04.733 lat (usec): min=2778, max=51186, avg=16691.81, stdev=7154.20 00:14:04.733 clat percentiles (usec): 00:14:04.733 | 1.00th=[ 4555], 5.00th=[ 8094], 10.00th=[10290], 20.00th=[11731], 00:14:04.733 | 30.00th=[12911], 40.00th=[13960], 50.00th=[15401], 60.00th=[16057], 00:14:04.733 | 70.00th=[17695], 80.00th=[20055], 90.00th=[24511], 95.00th=[28181], 00:14:04.733 | 99.00th=[45876], 99.50th=[47973], 99.90th=[48497], 99.95th=[51119], 00:14:04.733 | 99.99th=[51119] 00:14:04.733 bw ( KiB/s): min=16384, max=16384, per=26.30%, avg=16384.00, stdev= 0.00, samples=2 00:14:04.733 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:14:04.733 lat (usec) : 750=0.01% 00:14:04.733 lat (msec) : 4=0.51%, 10=6.15%, 20=73.81%, 50=19.22%, 100=0.29% 00:14:04.733 cpu : usr=3.70%, sys=6.59%, ctx=319, majf=0, minf=1 00:14:04.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:04.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:04.733 issued rwts: total=3712,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:04.733 job3: (groupid=0, jobs=1): err= 0: pid=3391911: Wed Apr 24 16:10:05 2024 00:14:04.733 read: IOPS=3285, BW=12.8MiB/s (13.5MB/s)(13.4MiB/1045msec) 00:14:04.733 slat (usec): min=2, max=18711, avg=156.38, stdev=1095.50 00:14:04.733 clat (usec): min=3362, max=72477, avg=21423.61, stdev=13056.19 00:14:04.733 lat (usec): min=4524, max=78093, avg=21579.99, stdev=13117.00 00:14:04.733 clat percentiles (usec): 00:14:04.733 | 1.00th=[ 8160], 5.00th=[10159], 10.00th=[11600], 20.00th=[12518], 00:14:04.733 | 30.00th=[13304], 40.00th=[14746], 50.00th=[16450], 60.00th=[18744], 00:14:04.733 | 70.00th=[21890], 80.00th=[27919], 90.00th=[41157], 95.00th=[54264], 00:14:04.733 | 99.00th=[63177], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:14:04.733 | 99.99th=[72877] 00:14:04.733 write: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1045msec); 0 zone resets 00:14:04.733 slat (usec): min=3, max=23148, avg=117.28, stdev=763.74 00:14:04.733 clat (usec): min=1655, max=41083, avg=15839.00, stdev=6437.06 00:14:04.733 lat (usec): min=1661, max=41108, avg=15956.28, stdev=6470.64 00:14:04.733 clat percentiles (usec): 00:14:04.733 | 1.00th=[ 3359], 5.00th=[ 8848], 10.00th=[11469], 20.00th=[12518], 00:14:04.733 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14353], 60.00th=[15270], 00:14:04.733 | 70.00th=[15926], 80.00th=[18220], 90.00th=[21890], 95.00th=[30540], 00:14:04.733 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:14:04.733 | 99.99th=[41157] 00:14:04.733 bw ( KiB/s): min=12680, max=15992, per=23.01%, avg=14336.00, stdev=2341.94, samples=2 00:14:04.733 iops : min= 3170, max= 3998, avg=3584.00, stdev=585.48, samples=2 00:14:04.733 lat (msec) : 2=0.16%, 4=0.60%, 10=5.34%, 20=69.00%, 50=21.16% 00:14:04.733 lat (msec) : 100=3.73% 00:14:04.733 cpu : usr=3.83%, sys=6.70%, ctx=386, majf=0, minf=1 00:14:04.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:04.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:04.733 issued rwts: total=3433,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.733 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:14:04.733 00:14:04.733 Run status group 0 (all jobs): 00:14:04.733 READ: bw=55.7MiB/s (58.4MB/s), 12.8MiB/s-16.3MiB/s (13.5MB/s-17.1MB/s), io=58.2MiB (61.1MB), run=1002-1045msec 00:14:04.733 WRITE: bw=60.8MiB/s (63.8MB/s), 13.4MiB/s-17.9MiB/s (14.0MB/s-18.8MB/s), io=63.6MiB (66.7MB), run=1002-1045msec 00:14:04.733 00:14:04.733 Disk stats (read/write): 00:14:04.733 nvme0n1: ios=3122/3407, merge=0/0, ticks=28562/29281, in_queue=57843, util=84.97% 00:14:04.733 nvme0n2: ios=3606/3776, merge=0/0, ticks=14548/16296, in_queue=30844, util=94.00% 00:14:04.733 nvme0n3: ios=3189/3584, merge=0/0, ticks=23774/31544, in_queue=55318, util=97.18% 00:14:04.733 nvme0n4: ios=2791/3072, merge=0/0, ticks=27459/23787, in_queue=51246, util=98.00% 00:14:04.733 16:10:05 -- target/fio.sh@55 -- # sync 00:14:04.733 16:10:05 -- target/fio.sh@59 -- # fio_pid=3392051 00:14:04.733 16:10:05 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:04.733 16:10:05 -- target/fio.sh@61 -- # sleep 3 00:14:04.733 [global] 00:14:04.733 thread=1 00:14:04.733 invalidate=1 00:14:04.733 rw=read 00:14:04.733 time_based=1 00:14:04.733 runtime=10 00:14:04.733 ioengine=libaio 00:14:04.733 direct=1 00:14:04.733 bs=4096 00:14:04.733 iodepth=1 00:14:04.733 norandommap=1 00:14:04.733 numjobs=1 00:14:04.733 00:14:04.733 [job0] 00:14:04.733 filename=/dev/nvme0n1 00:14:04.733 [job1] 00:14:04.733 filename=/dev/nvme0n2 00:14:04.733 [job2] 00:14:04.733 filename=/dev/nvme0n3 00:14:04.733 [job3] 00:14:04.733 filename=/dev/nvme0n4 00:14:04.734 Could not set queue depth (nvme0n1) 00:14:04.734 Could not set queue depth (nvme0n2) 00:14:04.734 Could not set queue depth (nvme0n3) 00:14:04.734 Could not set queue depth (nvme0n4) 00:14:04.734 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:04.734 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:04.734 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:04.734 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:04.734 fio-3.35 00:14:04.734 Starting 4 threads 00:14:08.025 16:10:08 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:08.025 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=376832, buflen=4096 00:14:08.025 fio: pid=3392146, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:08.025 16:10:08 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:08.025 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=24530944, buflen=4096 00:14:08.025 fio: pid=3392145, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:08.025 16:10:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:08.025 16:10:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:08.282 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=339968, buflen=4096 00:14:08.282 fio: pid=3392143, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:08.282 16:10:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:14:08.282 16:10:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:08.541 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=35123200, buflen=4096 00:14:08.541 fio: pid=3392144, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:14:08.541 16:10:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:08.541 16:10:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:08.541 00:14:08.541 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3392143: Wed Apr 24 16:10:09 2024 00:14:08.541 read: IOPS=25, BW=98.8KiB/s (101kB/s)(332KiB/3360msec) 00:14:08.541 slat (usec): min=11, max=8774, avg=127.89, stdev=954.79 00:14:08.541 clat (usec): min=564, max=42946, avg=40288.32, stdev=6073.97 00:14:08.541 lat (usec): min=579, max=50941, avg=40417.59, stdev=6182.85 00:14:08.541 clat percentiles (usec): 00:14:08.541 | 1.00th=[ 562], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:08.541 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:08.541 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:14:08.541 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:14:08.541 | 99.99th=[42730] 00:14:08.541 bw ( KiB/s): min= 96, max= 104, per=0.60%, avg=97.33, stdev= 3.27, samples=6 00:14:08.541 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:14:08.541 lat (usec) : 750=1.19% 00:14:08.541 lat (msec) : 4=1.19%, 50=96.43% 00:14:08.541 cpu : usr=0.12%, sys=0.00%, ctx=87, majf=0, minf=1 00:14:08.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.541 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.541 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.541 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3392144: Wed Apr 24 16:10:09 2024 00:14:08.541 read: IOPS=2358, BW=9433KiB/s (9660kB/s)(33.5MiB/3636msec) 00:14:08.541 slat (usec): min=4, max=15870, avg=22.29, stdev=275.93 00:14:08.541 clat (usec): min=233, max=42131, avg=397.66, stdev=1250.59 00:14:08.541 lat (usec): min=239, max=42144, avg=419.95, stdev=1281.14 00:14:08.541 clat percentiles (usec): 00:14:08.541 | 1.00th=[ 262], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 302], 00:14:08.541 | 30.00th=[ 318], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 375], 00:14:08.541 | 70.00th=[ 396], 80.00th=[ 416], 90.00th=[ 433], 95.00th=[ 457], 00:14:08.541 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 2999], 99.95th=[41157], 00:14:08.541 | 99.99th=[42206] 00:14:08.541 bw ( KiB/s): min= 6065, max=12424, per=59.60%, avg=9664.14, stdev=2136.99, samples=7 00:14:08.541 iops : min= 1516, max= 3106, avg=2416.00, stdev=534.32, samples=7 00:14:08.541 lat (usec) : 250=0.52%, 500=98.27%, 750=0.96%, 1000=0.09% 00:14:08.541 lat (msec) : 2=0.02%, 4=0.02%, 50=0.09% 00:14:08.541 cpu : usr=1.82%, sys=4.76%, ctx=8581, majf=0, minf=1 00:14:08.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.541 complete : 
0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.541 issued rwts: total=8576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.541 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3392145: Wed Apr 24 16:10:09 2024 00:14:08.541 read: IOPS=1917, BW=7668KiB/s (7852kB/s)(23.4MiB/3124msec) 00:14:08.541 slat (nsec): min=5566, max=71432, avg=16192.49, stdev=8326.24 00:14:08.541 clat (usec): min=306, max=42432, avg=501.31, stdev=1181.51 00:14:08.541 lat (usec): min=318, max=42440, avg=517.50, stdev=1181.51 00:14:08.541 clat percentiles (usec): 00:14:08.541 | 1.00th=[ 355], 5.00th=[ 371], 10.00th=[ 392], 20.00th=[ 420], 00:14:08.541 | 30.00th=[ 437], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 469], 00:14:08.541 | 70.00th=[ 486], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 578], 00:14:08.541 | 99.00th=[ 627], 99.50th=[ 635], 99.90th=[ 1270], 99.95th=[40633], 00:14:08.541 | 99.99th=[42206] 00:14:08.541 bw ( KiB/s): min= 7112, max= 8816, per=49.22%, avg=7980.00, stdev=641.33, samples=6 00:14:08.541 iops : min= 1778, max= 2204, avg=1995.00, stdev=160.33, samples=6 00:14:08.541 lat (usec) : 500=74.96%, 750=24.87%, 1000=0.02% 00:14:08.541 lat (msec) : 2=0.05%, 50=0.08% 00:14:08.541 cpu : usr=1.70%, sys=4.90%, ctx=5990, majf=0, minf=1 00:14:08.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.541 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.541 issued rwts: total=5990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.541 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3392146: Wed Apr 24 16:10:09 2024 00:14:08.541 read: IOPS=32, BW=129KiB/s (132kB/s)(368KiB/2862msec) 00:14:08.541 slat (nsec): min=7810, max=55681, avg=23909.23, stdev=10572.98 00:14:08.541 clat (usec): min=333, max=45037, avg=31065.80, stdev=17744.91 00:14:08.541 lat (usec): min=350, max=45055, avg=31089.67, stdev=17743.65 00:14:08.541 clat percentiles (usec): 00:14:08.541 | 1.00th=[ 334], 5.00th=[ 404], 10.00th=[ 478], 20.00th=[ 603], 00:14:08.541 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:08.541 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:14:08.541 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:14:08.541 | 99.99th=[44827] 00:14:08.541 bw ( KiB/s): min= 96, max= 168, per=0.78%, avg=126.40, stdev=34.59, samples=5 00:14:08.541 iops : min= 24, max= 42, avg=31.60, stdev= 8.65, samples=5 00:14:08.541 lat (usec) : 500=12.90%, 750=10.75%, 1000=1.08% 00:14:08.541 lat (msec) : 50=74.19% 00:14:08.541 cpu : usr=0.00%, sys=0.14%, ctx=93, majf=0, minf=1 00:14:08.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.541 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.541 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.541 00:14:08.541 Run status group 0 (all jobs): 00:14:08.541 READ: bw=15.8MiB/s (16.6MB/s), 98.8KiB/s-9433KiB/s (101kB/s-9660kB/s), io=57.6MiB (60.4MB), run=2862-3636msec 00:14:08.541 00:14:08.541 Disk stats 
(read/write): 00:14:08.541 nvme0n1: ios=122/0, merge=0/0, ticks=3533/0, in_queue=3533, util=99.91% 00:14:08.541 nvme0n2: ios=8573/0, merge=0/0, ticks=3052/0, in_queue=3052, util=94.74% 00:14:08.541 nvme0n3: ios=5987/0, merge=0/0, ticks=2744/0, in_queue=2744, util=96.67% 00:14:08.541 nvme0n4: ios=91/0, merge=0/0, ticks=2818/0, in_queue=2818, util=96.72% 00:14:08.799 16:10:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:08.799 16:10:10 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:09.057 16:10:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:09.057 16:10:10 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:09.315 16:10:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:09.315 16:10:10 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:09.573 16:10:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:09.573 16:10:10 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:09.831 16:10:11 -- target/fio.sh@69 -- # fio_status=0 00:14:09.831 16:10:11 -- target/fio.sh@70 -- # wait 3392051 00:14:09.831 16:10:11 -- target/fio.sh@70 -- # fio_status=4 00:14:09.831 16:10:11 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.088 16:10:11 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:10.088 16:10:11 -- common/autotest_common.sh@1205 -- # local i=0 00:14:10.088 16:10:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:10.088 16:10:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.088 16:10:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:10.088 16:10:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.089 16:10:11 -- common/autotest_common.sh@1217 -- # return 0 00:14:10.089 16:10:11 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:10.089 16:10:11 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:10.089 nvmf hotplug test: fio failed as expected 00:14:10.089 16:10:11 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.347 16:10:11 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:10.347 16:10:11 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:10.347 16:10:11 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:10.347 16:10:11 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:10.347 16:10:11 -- target/fio.sh@91 -- # nvmftestfini 00:14:10.347 16:10:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:10.347 16:10:11 -- nvmf/common.sh@117 -- # sync 00:14:10.347 16:10:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.347 16:10:11 -- nvmf/common.sh@120 -- # set +e 00:14:10.347 16:10:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.347 16:10:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.347 rmmod nvme_tcp 00:14:10.347 rmmod nvme_fabrics 00:14:10.347 rmmod nvme_keyring 
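A note for readers following the trace: the io_u errors and the 'nvmf hotplug test: fio failed as expected' banner above are the point of this phase, not a regression. target/fio.sh starts ten-second fio readers against the four exported namespaces and then deletes the backing bdevs over RPC while I/O is still in flight, so each delete is expected to surface as err=121 (Remote I/O error) in the matching job. A condensed sketch of that pattern, assembled from the commands visible in this trace (paths shortened; the real logic, including the fio_status bookkeeping, lives in test/nvmf/target/fio.sh):

# Start long-running readers in the background against the NVMe-oF namespaces.
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# Pull the backing bdevs out from under the running readers; each delete
# should show up in the corresponding fio job as err=121 (Remote I/O error).
scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
# The real script stores the exit code in fio_status and asserts it is nonzero.
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'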
00:14:10.347 16:10:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.347 16:10:11 -- nvmf/common.sh@124 -- # set -e 00:14:10.347 16:10:11 -- nvmf/common.sh@125 -- # return 0 00:14:10.347 16:10:11 -- nvmf/common.sh@478 -- # '[' -n 3390026 ']' 00:14:10.347 16:10:11 -- nvmf/common.sh@479 -- # killprocess 3390026 00:14:10.347 16:10:11 -- common/autotest_common.sh@936 -- # '[' -z 3390026 ']' 00:14:10.347 16:10:11 -- common/autotest_common.sh@940 -- # kill -0 3390026 00:14:10.347 16:10:11 -- common/autotest_common.sh@941 -- # uname 00:14:10.347 16:10:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.347 16:10:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3390026 00:14:10.347 16:10:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:10.347 16:10:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:10.347 16:10:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3390026' 00:14:10.347 killing process with pid 3390026 00:14:10.347 16:10:11 -- common/autotest_common.sh@955 -- # kill 3390026 00:14:10.347 16:10:11 -- common/autotest_common.sh@960 -- # wait 3390026 00:14:10.605 16:10:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:10.605 16:10:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:10.605 16:10:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:10.605 16:10:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.605 16:10:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.605 16:10:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.605 16:10:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.605 16:10:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.140 16:10:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.140 00:14:13.140 real 0m23.409s 00:14:13.140 user 1m19.853s 00:14:13.140 sys 0m7.639s 00:14:13.140 16:10:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:13.140 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:13.140 ************************************ 00:14:13.140 END TEST nvmf_fio_target 00:14:13.140 ************************************ 00:14:13.140 16:10:13 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:13.140 16:10:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:13.140 16:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:13.140 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:13.140 ************************************ 00:14:13.140 START TEST nvmf_bdevio 00:14:13.140 ************************************ 00:14:13.141 16:10:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:13.141 * Looking for test storage... 
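The starred START TEST/END TEST banners and the real/user/sys timing triple seen at this point come from the autotest run_test helper, through which nvmf.sh launches each suite (visible above as run_test nvmf_bdevio ... bdevio.sh --transport=tcp). Roughly, as a simplified sketch inferred from the banners in this log rather than the verbatim helper in autotest_common.sh:

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # the sub-test script and its arguments; emits real/user/sys
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}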
00:14:13.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.141 16:10:14 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.141 16:10:14 -- nvmf/common.sh@7 -- # uname -s 00:14:13.141 16:10:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.141 16:10:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.141 16:10:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.141 16:10:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.141 16:10:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.141 16:10:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.141 16:10:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.141 16:10:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.141 16:10:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.141 16:10:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.141 16:10:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:13.141 16:10:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:13.141 16:10:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.141 16:10:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.141 16:10:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.141 16:10:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.141 16:10:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.141 16:10:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.141 16:10:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.141 16:10:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.141 16:10:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.141 16:10:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.141 16:10:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.141 16:10:14 -- paths/export.sh@5 -- # export PATH 00:14:13.141 16:10:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.141 16:10:14 -- nvmf/common.sh@47 -- # : 0 00:14:13.141 16:10:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.141 16:10:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.141 16:10:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.141 16:10:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.141 16:10:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.141 16:10:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.141 16:10:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.141 16:10:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.141 16:10:14 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.141 16:10:14 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.141 16:10:14 -- target/bdevio.sh@14 -- # nvmftestinit 00:14:13.141 16:10:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:13.141 16:10:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.141 16:10:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:13.141 16:10:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:13.141 16:10:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:13.141 16:10:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.141 16:10:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.141 16:10:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.141 16:10:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:13.141 16:10:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:13.141 16:10:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.141 16:10:14 -- common/autotest_common.sh@10 -- # set +x 00:14:15.043 16:10:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:15.043 16:10:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.043 16:10:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.043 16:10:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.043 16:10:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.043 16:10:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.043 16:10:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.043 16:10:16 -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.043 16:10:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.043 16:10:16 -- nvmf/common.sh@296 
-- # e810=() 00:14:15.043 16:10:16 -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.043 16:10:16 -- nvmf/common.sh@297 -- # x722=() 00:14:15.043 16:10:16 -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.043 16:10:16 -- nvmf/common.sh@298 -- # mlx=() 00:14:15.043 16:10:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.043 16:10:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.043 16:10:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.043 16:10:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:15.043 16:10:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.043 16:10:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.043 16:10:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:15.043 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:15.043 16:10:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.043 16:10:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:15.043 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:15.043 16:10:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.043 16:10:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:15.043 16:10:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.043 16:10:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.043 16:10:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:15.043 16:10:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.043 16:10:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:15.043 Found 
net devices under 0000:09:00.0: cvl_0_0 00:14:15.043 16:10:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.043 16:10:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.043 16:10:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.043 16:10:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:15.043 16:10:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.043 16:10:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:15.044 Found net devices under 0000:09:00.1: cvl_0_1 00:14:15.044 16:10:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.044 16:10:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:15.044 16:10:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:15.044 16:10:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:15.044 16:10:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:15.044 16:10:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:15.044 16:10:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.044 16:10:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.044 16:10:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.044 16:10:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:15.044 16:10:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.044 16:10:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.044 16:10:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:15.044 16:10:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.044 16:10:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.044 16:10:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:15.044 16:10:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:15.044 16:10:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.044 16:10:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.044 16:10:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.044 16:10:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.044 16:10:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:15.044 16:10:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.044 16:10:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.044 16:10:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.044 16:10:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:15.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:14:15.044 00:14:15.044 --- 10.0.0.2 ping statistics --- 00:14:15.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.044 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:14:15.044 16:10:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:15.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:14:15.044 00:14:15.044 --- 10.0.0.1 ping statistics --- 00:14:15.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.044 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:15.044 16:10:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.044 16:10:16 -- nvmf/common.sh@411 -- # return 0 00:14:15.044 16:10:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:15.044 16:10:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.044 16:10:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:15.044 16:10:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:15.044 16:10:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.044 16:10:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:15.044 16:10:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:15.044 16:10:16 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:15.044 16:10:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:15.044 16:10:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:15.044 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:15.044 16:10:16 -- nvmf/common.sh@470 -- # nvmfpid=3394774 00:14:15.044 16:10:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:15.044 16:10:16 -- nvmf/common.sh@471 -- # waitforlisten 3394774 00:14:15.044 16:10:16 -- common/autotest_common.sh@817 -- # '[' -z 3394774 ']' 00:14:15.044 16:10:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.044 16:10:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:15.044 16:10:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.044 16:10:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:15.044 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:15.044 [2024-04-24 16:10:16.263385] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:14:15.044 [2024-04-24 16:10:16.263456] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.044 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.044 [2024-04-24 16:10:16.326136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.301 [2024-04-24 16:10:16.429425] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.301 [2024-04-24 16:10:16.429476] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.301 [2024-04-24 16:10:16.429489] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.301 [2024-04-24 16:10:16.429501] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.301 [2024-04-24 16:10:16.429511] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
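For orientation, the two-way ping above is the checkpoint for the test network that nvmf_tcp_init builds out of the e810 port pair: cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace and hosts the target, while its peer cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator side. The same setup by hand, using exactly the commands traced in this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT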
00:14:15.301 [2024-04-24 16:10:16.429608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:15.301 [2024-04-24 16:10:16.429668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:15.301 [2024-04-24 16:10:16.429733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:15.301 [2024-04-24 16:10:16.429736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.301 16:10:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:15.301 16:10:16 -- common/autotest_common.sh@850 -- # return 0 00:14:15.302 16:10:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:15.302 16:10:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:15.302 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:15.302 16:10:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.302 16:10:16 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.302 16:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.302 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:15.302 [2024-04-24 16:10:16.585546] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.559 16:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.559 16:10:16 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:15.559 16:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.559 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:15.559 Malloc0 00:14:15.559 16:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.559 16:10:16 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:15.559 16:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.559 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:15.559 16:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.559 16:10:16 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:15.559 16:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.559 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:15.559 16:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.559 16:10:16 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.559 16:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.559 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:15.559 [2024-04-24 16:10:16.638713] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.559 16:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.559 16:10:16 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:15.559 16:10:16 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:15.559 16:10:16 -- nvmf/common.sh@521 -- # config=() 00:14:15.559 16:10:16 -- nvmf/common.sh@521 -- # local subsystem config 00:14:15.559 16:10:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:15.559 16:10:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:15.559 { 00:14:15.559 "params": { 00:14:15.559 "name": "Nvme$subsystem", 00:14:15.559 "trtype": "$TEST_TRANSPORT", 00:14:15.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:15.559 "adrfam": "ipv4", 00:14:15.559 "trsvcid": 
"$NVMF_PORT", 00:14:15.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:15.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:15.559 "hdgst": ${hdgst:-false}, 00:14:15.559 "ddgst": ${ddgst:-false} 00:14:15.559 }, 00:14:15.559 "method": "bdev_nvme_attach_controller" 00:14:15.559 } 00:14:15.559 EOF 00:14:15.559 )") 00:14:15.559 16:10:16 -- nvmf/common.sh@543 -- # cat 00:14:15.559 16:10:16 -- nvmf/common.sh@545 -- # jq . 00:14:15.559 16:10:16 -- nvmf/common.sh@546 -- # IFS=, 00:14:15.559 16:10:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:15.559 "params": { 00:14:15.559 "name": "Nvme1", 00:14:15.559 "trtype": "tcp", 00:14:15.559 "traddr": "10.0.0.2", 00:14:15.559 "adrfam": "ipv4", 00:14:15.559 "trsvcid": "4420", 00:14:15.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:15.559 "hdgst": false, 00:14:15.559 "ddgst": false 00:14:15.559 }, 00:14:15.559 "method": "bdev_nvme_attach_controller" 00:14:15.559 }' 00:14:15.559 [2024-04-24 16:10:16.685465] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:14:15.559 [2024-04-24 16:10:16.685572] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394798 ] 00:14:15.559 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.559 [2024-04-24 16:10:16.749384] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:15.817 [2024-04-24 16:10:16.858932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.817 [2024-04-24 16:10:16.858980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.817 [2024-04-24 16:10:16.858984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.817 I/O targets: 00:14:15.817 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:15.817 00:14:15.817 00:14:15.817 CUnit - A unit testing framework for C - Version 2.1-3 00:14:15.817 http://cunit.sourceforge.net/ 00:14:15.817 00:14:15.817 00:14:15.817 Suite: bdevio tests on: Nvme1n1 00:14:15.817 Test: blockdev write read block ...passed 00:14:16.074 Test: blockdev write zeroes read block ...passed 00:14:16.074 Test: blockdev write zeroes read no split ...passed 00:14:16.074 Test: blockdev write zeroes read split ...passed 00:14:16.074 Test: blockdev write zeroes read split partial ...passed 00:14:16.074 Test: blockdev reset ...[2024-04-24 16:10:17.246653] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:16.074 [2024-04-24 16:10:17.246769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x907f60 (9): Bad file descriptor 00:14:16.074 [2024-04-24 16:10:17.300819] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:16.074 passed 00:14:16.074 Test: blockdev write read 8 blocks ...passed 00:14:16.074 Test: blockdev write read size > 128k ...passed 00:14:16.074 Test: blockdev write read invalid size ...passed 00:14:16.331 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:16.331 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:16.331 Test: blockdev write read max offset ...passed 00:14:16.331 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:16.331 Test: blockdev writev readv 8 blocks ...passed 00:14:16.331 Test: blockdev writev readv 30 x 1block ...passed 00:14:16.331 Test: blockdev writev readv block ...passed 00:14:16.331 Test: blockdev writev readv size > 128k ...passed 00:14:16.331 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:16.331 Test: blockdev comparev and writev ...[2024-04-24 16:10:17.519809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:16.331 [2024-04-24 16:10:17.519845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:16.331 [2024-04-24 16:10:17.519869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:16.331 [2024-04-24 16:10:17.519898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:16.331 [2024-04-24 16:10:17.520285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:16.331 [2024-04-24 16:10:17.520309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:16.331 [2024-04-24 16:10:17.520331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:16.331 [2024-04-24 16:10:17.520347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:16.331 [2024-04-24 16:10:17.520758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:16.331 [2024-04-24 16:10:17.520786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:16.331 [2024-04-24 16:10:17.520809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:16.331 [2024-04-24 16:10:17.520826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:16.331 [2024-04-24 16:10:17.521222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:16.331 [2024-04-24 16:10:17.521248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:16.331 [2024-04-24 16:10:17.521271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:16.332 [2024-04-24 16:10:17.521288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:16.332 passed 00:14:16.332 Test: blockdev nvme passthru rw ...passed 00:14:16.332 Test: blockdev nvme passthru vendor specific ...[2024-04-24 16:10:17.604106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:16.332 [2024-04-24 16:10:17.604133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:16.332 [2024-04-24 16:10:17.604320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:16.332 [2024-04-24 16:10:17.604345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:16.332 [2024-04-24 16:10:17.604531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:16.332 [2024-04-24 16:10:17.604555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:16.332 [2024-04-24 16:10:17.604751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:16.332 [2024-04-24 16:10:17.604786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:16.332 passed 00:14:16.589 Test: blockdev nvme admin passthru ...passed 00:14:16.589 Test: blockdev copy ...passed 00:14:16.589 00:14:16.589 Run Summary: Type Total Ran Passed Failed Inactive 00:14:16.589 suites 1 1 n/a 0 0 00:14:16.589 tests 23 23 23 0 0 00:14:16.589 asserts 152 152 152 0 n/a 00:14:16.589 00:14:16.589 Elapsed time = 1.239 seconds 00:14:16.846 16:10:17 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.846 16:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.846 16:10:17 -- common/autotest_common.sh@10 -- # set +x 00:14:16.846 16:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.846 16:10:17 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:16.846 16:10:17 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:16.846 16:10:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:16.846 16:10:17 -- nvmf/common.sh@117 -- # sync 00:14:16.846 16:10:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:16.846 16:10:17 -- nvmf/common.sh@120 -- # set +e 00:14:16.846 16:10:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:16.846 16:10:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:16.846 rmmod nvme_tcp 00:14:16.846 rmmod nvme_fabrics 00:14:16.846 rmmod nvme_keyring 00:14:16.846 16:10:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.846 16:10:17 -- nvmf/common.sh@124 -- # set -e 00:14:16.846 16:10:17 -- nvmf/common.sh@125 -- # return 0 00:14:16.846 16:10:17 -- nvmf/common.sh@478 -- # '[' -n 3394774 ']' 00:14:16.846 16:10:17 -- nvmf/common.sh@479 -- # killprocess 3394774 00:14:16.846 16:10:17 -- common/autotest_common.sh@936 -- # '[' -z 3394774 ']' 00:14:16.846 16:10:17 -- common/autotest_common.sh@940 -- # kill -0 3394774 00:14:16.846 16:10:17 -- common/autotest_common.sh@941 -- # uname 00:14:16.846 16:10:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:16.846 16:10:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3394774 00:14:16.846 16:10:17 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:14:16.846 16:10:17 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:14:16.846 16:10:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3394774' 00:14:16.846 killing process with pid 3394774 00:14:16.846 16:10:17 -- common/autotest_common.sh@955 -- # kill 3394774 00:14:16.846 16:10:17 -- common/autotest_common.sh@960 -- # wait 3394774 00:14:17.103 16:10:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:17.103 16:10:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:17.103 16:10:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:17.103 16:10:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.103 16:10:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.103 16:10:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.103 16:10:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.103 16:10:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.648 16:10:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.648 00:14:19.648 real 0m6.359s 00:14:19.648 user 0m10.060s 00:14:19.648 sys 0m2.093s 00:14:19.648 16:10:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:19.648 16:10:20 -- common/autotest_common.sh@10 -- # set +x 00:14:19.648 ************************************ 00:14:19.648 END TEST nvmf_bdevio 00:14:19.648 ************************************ 00:14:19.648 16:10:20 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:14:19.648 16:10:20 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:19.648 16:10:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:19.648 16:10:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.648 16:10:20 -- common/autotest_common.sh@10 -- # set +x 00:14:19.648 ************************************ 00:14:19.648 START TEST nvmf_bdevio_no_huge 00:14:19.648 ************************************ 00:14:19.648 16:10:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:19.648 * Looking for test storage... 
00:14:19.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.648 16:10:20 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.648 16:10:20 -- nvmf/common.sh@7 -- # uname -s 00:14:19.648 16:10:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.648 16:10:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.648 16:10:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.648 16:10:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.648 16:10:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.648 16:10:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.648 16:10:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.648 16:10:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.648 16:10:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.648 16:10:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.648 16:10:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.648 16:10:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.648 16:10:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.648 16:10:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.648 16:10:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.648 16:10:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.648 16:10:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.648 16:10:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.648 16:10:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.648 16:10:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.648 16:10:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.648 16:10:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.648 16:10:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.648 16:10:20 -- paths/export.sh@5 -- # export PATH 00:14:19.648 16:10:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.648 16:10:20 -- nvmf/common.sh@47 -- # : 0 00:14:19.648 16:10:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.648 16:10:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.648 16:10:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.648 16:10:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.648 16:10:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.648 16:10:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.648 16:10:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.648 16:10:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.648 16:10:20 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.648 16:10:20 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.648 16:10:20 -- target/bdevio.sh@14 -- # nvmftestinit 00:14:19.648 16:10:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:19.648 16:10:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.648 16:10:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:19.648 16:10:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:19.648 16:10:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:19.648 16:10:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.648 16:10:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.648 16:10:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.648 16:10:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:19.648 16:10:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:19.648 16:10:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.648 16:10:20 -- common/autotest_common.sh@10 -- # set +x 00:14:21.648 16:10:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.648 16:10:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.648 16:10:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.648 16:10:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.648 16:10:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.648 16:10:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.648 16:10:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.648 16:10:22 -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.648 16:10:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.648 16:10:22 -- nvmf/common.sh@296 
-- # e810=() 00:14:21.648 16:10:22 -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.648 16:10:22 -- nvmf/common.sh@297 -- # x722=() 00:14:21.648 16:10:22 -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.648 16:10:22 -- nvmf/common.sh@298 -- # mlx=() 00:14:21.648 16:10:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.648 16:10:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.648 16:10:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.648 16:10:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:21.648 16:10:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.648 16:10:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.648 16:10:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:21.648 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:21.648 16:10:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.648 16:10:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:21.648 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:21.648 16:10:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.648 16:10:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.648 16:10:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.648 16:10:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:21.648 16:10:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.648 16:10:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:21.648 Found 
net devices under 0000:09:00.0: cvl_0_0 00:14:21.648 16:10:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.648 16:10:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.648 16:10:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.648 16:10:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:21.648 16:10:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.648 16:10:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:21.648 Found net devices under 0000:09:00.1: cvl_0_1 00:14:21.648 16:10:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.648 16:10:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:21.648 16:10:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:21.648 16:10:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:21.648 16:10:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.648 16:10:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.648 16:10:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.648 16:10:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:21.648 16:10:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.648 16:10:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.648 16:10:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:21.648 16:10:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.648 16:10:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.648 16:10:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:21.648 16:10:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:21.648 16:10:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.648 16:10:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.648 16:10:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.648 16:10:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.648 16:10:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.648 16:10:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.648 16:10:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.648 16:10:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.648 16:10:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:14:21.648 00:14:21.648 --- 10.0.0.2 ping statistics --- 00:14:21.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.648 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:14:21.648 16:10:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:14:21.648 00:14:21.648 --- 10.0.0.1 ping statistics --- 00:14:21.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.648 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:14:21.648 16:10:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.648 16:10:22 -- nvmf/common.sh@411 -- # return 0 00:14:21.648 16:10:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:21.648 16:10:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.648 16:10:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:21.648 16:10:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.648 16:10:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:21.648 16:10:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:21.648 16:10:22 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:21.648 16:10:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:21.648 16:10:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:21.648 16:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:21.648 16:10:22 -- nvmf/common.sh@470 -- # nvmfpid=3397000 00:14:21.648 16:10:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:21.648 16:10:22 -- nvmf/common.sh@471 -- # waitforlisten 3397000 00:14:21.648 16:10:22 -- common/autotest_common.sh@817 -- # '[' -z 3397000 ']' 00:14:21.648 16:10:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.648 16:10:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:21.648 16:10:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.648 16:10:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:21.648 16:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:21.648 [2024-04-24 16:10:22.609329] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:14:21.648 [2024-04-24 16:10:22.609417] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:21.649 [2024-04-24 16:10:22.681957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.649 [2024-04-24 16:10:22.805138] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.649 [2024-04-24 16:10:22.805209] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.649 [2024-04-24 16:10:22.805226] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.649 [2024-04-24 16:10:22.805240] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.649 [2024-04-24 16:10:22.805252] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
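The trace above (nvmf/common.sh@229-268) wires the two E810 ports into a point-to-point NVMe/TCP test topology: cvl_0_0 is moved into a private network namespace and addressed as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the target binary is then launched inside that namespace via "ip netns exec cvl_0_0_ns_spdk" (visible in the nvmfpid line above, and again before the tls suite later in the log). A minimal sketch of the same wiring, using only commands that appear in the trace; the cvl_* interface names and the 10.0.0.0/24 addressing are this run's values, not fixed constants:

  # flush stale addresses, then split the two ports across namespaces
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, private ns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP (port 4420) through the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1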
00:14:21.649 [2024-04-24 16:10:22.805353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:21.649 [2024-04-24 16:10:22.805407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:21.649 [2024-04-24 16:10:22.805460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:21.649 [2024-04-24 16:10:22.805463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.649 16:10:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:21.649 16:10:22 -- common/autotest_common.sh@850 -- # return 0 00:14:21.649 16:10:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:21.649 16:10:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:21.649 16:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:21.649 16:10:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.649 16:10:22 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.649 16:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.649 16:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:21.907 [2024-04-24 16:10:22.933848] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.907 16:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.907 16:10:22 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:21.907 16:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.907 16:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:21.907 Malloc0 00:14:21.907 16:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.907 16:10:22 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:21.907 16:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.907 16:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:21.907 16:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.907 16:10:22 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:21.907 16:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.907 16:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:21.907 16:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.907 16:10:22 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.907 16:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.907 16:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:21.907 [2024-04-24 16:10:22.972238] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.907 16:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.907 16:10:22 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:21.907 16:10:22 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:21.907 16:10:22 -- nvmf/common.sh@521 -- # config=() 00:14:21.907 16:10:22 -- nvmf/common.sh@521 -- # local subsystem config 00:14:21.907 16:10:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:21.907 16:10:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:21.907 { 00:14:21.907 "params": { 00:14:21.907 "name": "Nvme$subsystem", 00:14:21.907 "trtype": "$TEST_TRANSPORT", 00:14:21.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:21.907 "adrfam": "ipv4", 00:14:21.907 
"trsvcid": "$NVMF_PORT", 00:14:21.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:21.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:21.907 "hdgst": ${hdgst:-false}, 00:14:21.907 "ddgst": ${ddgst:-false} 00:14:21.907 }, 00:14:21.907 "method": "bdev_nvme_attach_controller" 00:14:21.907 } 00:14:21.907 EOF 00:14:21.907 )") 00:14:21.907 16:10:22 -- nvmf/common.sh@543 -- # cat 00:14:21.907 16:10:22 -- nvmf/common.sh@545 -- # jq . 00:14:21.907 16:10:22 -- nvmf/common.sh@546 -- # IFS=, 00:14:21.907 16:10:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:21.907 "params": { 00:14:21.907 "name": "Nvme1", 00:14:21.907 "trtype": "tcp", 00:14:21.907 "traddr": "10.0.0.2", 00:14:21.907 "adrfam": "ipv4", 00:14:21.907 "trsvcid": "4420", 00:14:21.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.907 "hdgst": false, 00:14:21.907 "ddgst": false 00:14:21.907 }, 00:14:21.907 "method": "bdev_nvme_attach_controller" 00:14:21.907 }' 00:14:21.907 [2024-04-24 16:10:23.017813] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:14:21.907 [2024-04-24 16:10:23.017886] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3397030 ] 00:14:21.907 [2024-04-24 16:10:23.080875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:22.165 [2024-04-24 16:10:23.193580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.165 [2024-04-24 16:10:23.193628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.165 [2024-04-24 16:10:23.193631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.165 I/O targets: 00:14:22.165 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:22.165 00:14:22.165 00:14:22.165 CUnit - A unit testing framework for C - Version 2.1-3 00:14:22.165 http://cunit.sourceforge.net/ 00:14:22.165 00:14:22.165 00:14:22.165 Suite: bdevio tests on: Nvme1n1 00:14:22.165 Test: blockdev write read block ...passed 00:14:22.165 Test: blockdev write zeroes read block ...passed 00:14:22.423 Test: blockdev write zeroes read no split ...passed 00:14:22.423 Test: blockdev write zeroes read split ...passed 00:14:22.423 Test: blockdev write zeroes read split partial ...passed 00:14:22.423 Test: blockdev reset ...[2024-04-24 16:10:23.569244] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:22.423 [2024-04-24 16:10:23.569348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8415c0 (9): Bad file descriptor 00:14:22.423 [2024-04-24 16:10:23.626432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:22.423 passed 00:14:22.423 Test: blockdev write read 8 blocks ...passed 00:14:22.423 Test: blockdev write read size > 128k ...passed 00:14:22.423 Test: blockdev write read invalid size ...passed 00:14:22.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:22.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:22.681 Test: blockdev write read max offset ...passed 00:14:22.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:22.681 Test: blockdev writev readv 8 blocks ...passed 00:14:22.681 Test: blockdev writev readv 30 x 1block ...passed 00:14:22.681 Test: blockdev writev readv block ...passed 00:14:22.681 Test: blockdev writev readv size > 128k ...passed 00:14:22.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:22.681 Test: blockdev comparev and writev ...[2024-04-24 16:10:23.842065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:22.681 [2024-04-24 16:10:23.842101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.842134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:22.681 [2024-04-24 16:10:23.842151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.842529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:22.681 [2024-04-24 16:10:23.842555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.842578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:22.681 [2024-04-24 16:10:23.842600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.843000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:22.681 [2024-04-24 16:10:23.843025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.843058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:22.681 [2024-04-24 16:10:23.843075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.843460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:22.681 [2024-04-24 16:10:23.843485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.843507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:22.681 [2024-04-24 16:10:23.843522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:22.681 passed 00:14:22.681 Test: blockdev nvme passthru rw ...passed 00:14:22.681 Test: blockdev nvme passthru vendor specific ...[2024-04-24 16:10:23.927083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:22.681 [2024-04-24 16:10:23.927110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.927303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:22.681 [2024-04-24 16:10:23.927327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.927507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:22.681 [2024-04-24 16:10:23.927531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:22.681 [2024-04-24 16:10:23.927716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:22.681 [2024-04-24 16:10:23.927739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:22.681 passed 00:14:22.681 Test: blockdev nvme admin passthru ...passed 00:14:22.941 Test: blockdev copy ...passed 00:14:22.941 00:14:22.941 Run Summary: Type Total Ran Passed Failed Inactive 00:14:22.941 suites 1 1 n/a 0 0 00:14:22.941 tests 23 23 23 0 0 00:14:22.941 asserts 152 152 152 0 n/a 00:14:22.941 00:14:22.941 Elapsed time = 1.252 seconds 00:14:23.198 16:10:24 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.198 16:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:23.198 16:10:24 -- common/autotest_common.sh@10 -- # set +x 00:14:23.198 16:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:23.198 16:10:24 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:23.198 16:10:24 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:23.198 16:10:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:23.198 16:10:24 -- nvmf/common.sh@117 -- # sync 00:14:23.198 16:10:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:23.198 16:10:24 -- nvmf/common.sh@120 -- # set +e 00:14:23.198 16:10:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.198 16:10:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:23.198 rmmod nvme_tcp 00:14:23.198 rmmod nvme_fabrics 00:14:23.198 rmmod nvme_keyring 00:14:23.198 16:10:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.198 16:10:24 -- nvmf/common.sh@124 -- # set -e 00:14:23.198 16:10:24 -- nvmf/common.sh@125 -- # return 0 00:14:23.198 16:10:24 -- nvmf/common.sh@478 -- # '[' -n 3397000 ']' 00:14:23.198 16:10:24 -- nvmf/common.sh@479 -- # killprocess 3397000 00:14:23.198 16:10:24 -- common/autotest_common.sh@936 -- # '[' -z 3397000 ']' 00:14:23.198 16:10:24 -- common/autotest_common.sh@940 -- # kill -0 3397000 00:14:23.198 16:10:24 -- common/autotest_common.sh@941 -- # uname 00:14:23.198 16:10:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:23.198 16:10:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3397000 00:14:23.198 16:10:24 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:14:23.198 16:10:24 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:14:23.198 16:10:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3397000' 00:14:23.198 killing process with pid 3397000 00:14:23.198 16:10:24 -- common/autotest_common.sh@955 -- # kill 3397000 00:14:23.198 16:10:24 -- common/autotest_common.sh@960 -- # wait 3397000 00:14:23.765 16:10:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:23.765 16:10:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:23.765 16:10:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:23.765 16:10:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.765 16:10:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:23.765 16:10:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.765 16:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.765 16:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.667 16:10:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:25.667 00:14:25.667 real 0m6.445s 00:14:25.667 user 0m10.503s 00:14:25.667 sys 0m2.493s 00:14:25.667 16:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:25.667 16:10:26 -- common/autotest_common.sh@10 -- # set +x 00:14:25.667 ************************************ 00:14:25.667 END TEST nvmf_bdevio_no_huge 00:14:25.667 ************************************ 00:14:25.667 16:10:26 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:25.667 16:10:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:25.667 16:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.667 16:10:26 -- common/autotest_common.sh@10 -- # set +x 00:14:25.926 ************************************ 00:14:25.926 START TEST nvmf_tls 00:14:25.926 ************************************ 00:14:25.926 16:10:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:25.926 * Looking for test storage... 
00:14:25.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.926 16:10:27 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.926 16:10:27 -- nvmf/common.sh@7 -- # uname -s 00:14:25.926 16:10:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.926 16:10:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.926 16:10:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.926 16:10:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.926 16:10:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.926 16:10:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.926 16:10:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.926 16:10:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.926 16:10:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.926 16:10:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.926 16:10:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:25.926 16:10:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:25.926 16:10:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.926 16:10:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.926 16:10:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.926 16:10:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.926 16:10:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.926 16:10:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.926 16:10:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.926 16:10:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.926 16:10:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.926 16:10:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.926 16:10:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.926 16:10:27 -- paths/export.sh@5 -- # export PATH 00:14:25.926 16:10:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.926 16:10:27 -- nvmf/common.sh@47 -- # : 0 00:14:25.926 16:10:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.926 16:10:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.926 16:10:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.926 16:10:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.926 16:10:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.926 16:10:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.926 16:10:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.926 16:10:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.926 16:10:27 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.926 16:10:27 -- target/tls.sh@62 -- # nvmftestinit 00:14:25.926 16:10:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:25.926 16:10:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.926 16:10:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:25.926 16:10:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:25.926 16:10:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:25.926 16:10:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.926 16:10:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.926 16:10:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.926 16:10:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:25.926 16:10:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:25.926 16:10:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.926 16:10:27 -- common/autotest_common.sh@10 -- # set +x 00:14:27.826 16:10:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:27.826 16:10:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.826 16:10:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.826 16:10:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.826 16:10:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.826 16:10:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.826 16:10:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.826 16:10:28 -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.826 16:10:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.826 16:10:28 -- nvmf/common.sh@296 -- # e810=() 00:14:27.826 
16:10:28 -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.826 16:10:28 -- nvmf/common.sh@297 -- # x722=() 00:14:27.826 16:10:28 -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.826 16:10:28 -- nvmf/common.sh@298 -- # mlx=() 00:14:27.826 16:10:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.826 16:10:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.826 16:10:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.826 16:10:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.826 16:10:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:27.826 16:10:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.826 16:10:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.826 16:10:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.826 16:10:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.826 16:10:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:27.826 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:27.827 16:10:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.827 16:10:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:27.827 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:27.827 16:10:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.827 16:10:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.827 16:10:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.827 16:10:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:27.827 16:10:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.827 16:10:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:27.827 Found net devices under 
0000:09:00.0: cvl_0_0 00:14:27.827 16:10:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.827 16:10:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.827 16:10:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.827 16:10:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:27.827 16:10:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.827 16:10:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:27.827 Found net devices under 0000:09:00.1: cvl_0_1 00:14:27.827 16:10:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.827 16:10:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:27.827 16:10:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:27.827 16:10:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:27.827 16:10:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:27.827 16:10:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.827 16:10:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.827 16:10:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.827 16:10:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.827 16:10:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.827 16:10:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.827 16:10:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.827 16:10:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.827 16:10:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.827 16:10:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.827 16:10:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.827 16:10:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.827 16:10:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.827 16:10:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.827 16:10:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.827 16:10:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.827 16:10:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.827 16:10:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.827 16:10:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.827 16:10:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:28.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:14:28.085 00:14:28.085 --- 10.0.0.2 ping statistics --- 00:14:28.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.085 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:14:28.085 16:10:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:28.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:14:28.085 00:14:28.085 --- 10.0.0.1 ping statistics --- 00:14:28.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.085 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:28.085 16:10:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.085 16:10:29 -- nvmf/common.sh@411 -- # return 0 00:14:28.085 16:10:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:28.085 16:10:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.085 16:10:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:28.085 16:10:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:28.085 16:10:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.085 16:10:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:28.085 16:10:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:28.085 16:10:29 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:28.085 16:10:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:28.085 16:10:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:28.085 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:28.085 16:10:29 -- nvmf/common.sh@470 -- # nvmfpid=3399105 00:14:28.085 16:10:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:28.085 16:10:29 -- nvmf/common.sh@471 -- # waitforlisten 3399105 00:14:28.085 16:10:29 -- common/autotest_common.sh@817 -- # '[' -z 3399105 ']' 00:14:28.085 16:10:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.085 16:10:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:28.085 16:10:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.086 16:10:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:28.086 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:28.086 [2024-04-24 16:10:29.190562] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:14:28.086 [2024-04-24 16:10:29.190658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.086 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.086 [2024-04-24 16:10:29.263510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.343 [2024-04-24 16:10:29.377780] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.343 [2024-04-24 16:10:29.377840] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.343 [2024-04-24 16:10:29.377858] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.343 [2024-04-24 16:10:29.377872] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.343 [2024-04-24 16:10:29.377884] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
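Note the tls target above is started with --wait-for-rpc (nvmfappstart -m 0x2 --wait-for-rpc): the app comes up with its framework paused so the ssl sock implementation can be configured over JSON-RPC before any subsystem initializes; only after the tls-version and ktls read-back checks does tls.sh issue framework_start_init (target/tls.sh@131, traced below). A condensed sketch of that ordering, assembled from the rpc.py calls visible in the trace that follows, with the rpc.py path shortened and the [[ ... ]] comparisons standing in for the script's version/ktls round-trip checks:

  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -m 0x2 --wait-for-rpc &   # framework paused, RPC server up
  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  [[ $(rpc.py sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]   # must read back 13
  rpc.py sock_impl_set_options -i ssl --enable-ktls
  [[ $(rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]
  rpc.py sock_impl_set_options -i ssl --disable-ktls
  [[ $(rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls) == false ]]
  rpc.py framework_start_init                                      # now let the target finish initializing
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener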
00:14:28.343 [2024-04-24 16:10:29.377918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.907 16:10:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:28.907 16:10:30 -- common/autotest_common.sh@850 -- # return 0 00:14:28.907 16:10:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:28.907 16:10:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:28.907 16:10:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.907 16:10:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.907 16:10:30 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:28.907 16:10:30 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:29.165 true 00:14:29.165 16:10:30 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:29.165 16:10:30 -- target/tls.sh@73 -- # jq -r .tls_version 00:14:29.422 16:10:30 -- target/tls.sh@73 -- # version=0 00:14:29.422 16:10:30 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:29.422 16:10:30 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:29.680 16:10:30 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:29.680 16:10:30 -- target/tls.sh@81 -- # jq -r .tls_version 00:14:29.938 16:10:31 -- target/tls.sh@81 -- # version=13 00:14:29.938 16:10:31 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:29.938 16:10:31 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:30.196 16:10:31 -- target/tls.sh@89 -- # jq -r .tls_version 00:14:30.196 16:10:31 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:30.455 16:10:31 -- target/tls.sh@89 -- # version=7 00:14:30.455 16:10:31 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:30.455 16:10:31 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:30.455 16:10:31 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:30.714 16:10:31 -- target/tls.sh@96 -- # ktls=false 00:14:30.714 16:10:31 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:30.714 16:10:31 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:30.973 16:10:32 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:30.973 16:10:32 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:14:31.231 16:10:32 -- target/tls.sh@104 -- # ktls=true 00:14:31.231 16:10:32 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:31.231 16:10:32 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:31.490 16:10:32 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:31.490 16:10:32 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:31.749 16:10:32 -- target/tls.sh@112 -- # ktls=false 00:14:31.749 16:10:32 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:31.749 16:10:32 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:14:31.749 16:10:32 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:31.749 16:10:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:31.749 16:10:32 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:31.749 16:10:32 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:14:31.749 16:10:32 -- nvmf/common.sh@693 -- # digest=1 00:14:31.749 16:10:32 -- nvmf/common.sh@694 -- # python - 00:14:31.749 16:10:32 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:31.749 16:10:32 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:31.749 16:10:32 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:31.749 16:10:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:31.749 16:10:32 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:31.749 16:10:32 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:14:31.749 16:10:32 -- nvmf/common.sh@693 -- # digest=1 00:14:31.749 16:10:32 -- nvmf/common.sh@694 -- # python - 00:14:31.749 16:10:33 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:31.749 16:10:33 -- target/tls.sh@121 -- # mktemp 00:14:31.749 16:10:33 -- target/tls.sh@121 -- # key_path=/tmp/tmp.RQVvMpA68A 00:14:31.749 16:10:33 -- target/tls.sh@122 -- # mktemp 00:14:31.749 16:10:33 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.O4wjryNhWG 00:14:31.749 16:10:33 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:31.749 16:10:33 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:31.749 16:10:33 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.RQVvMpA68A 00:14:31.749 16:10:33 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.O4wjryNhWG 00:14:31.749 16:10:33 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:32.008 16:10:33 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:14:32.574 16:10:33 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.RQVvMpA68A 00:14:32.574 16:10:33 -- target/tls.sh@49 -- # local key=/tmp/tmp.RQVvMpA68A 00:14:32.574 16:10:33 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:32.574 [2024-04-24 16:10:33.819926] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.574 16:10:33 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:32.832 16:10:34 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:33.090 [2024-04-24 16:10:34.289215] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:33.090 [2024-04-24 16:10:34.289475] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.090 16:10:34 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:33.348 malloc0 00:14:33.348 16:10:34 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:33.606 16:10:34 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RQVvMpA68A 00:14:33.864 [2024-04-24 16:10:35.110250] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:33.864 16:10:35 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RQVvMpA68A 00:14:33.864 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.061 Initializing NVMe Controllers 00:14:46.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:46.061 Initialization complete. Launching workers. 00:14:46.061 ======================================================== 00:14:46.061 Latency(us) 00:14:46.061 Device Information : IOPS MiB/s Average min max 00:14:46.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7910.29 30.90 8093.41 1246.02 10306.41 00:14:46.061 ======================================================== 00:14:46.061 Total : 7910.29 30.90 8093.41 1246.02 10306.41 00:14:46.061 00:14:46.061 16:10:45 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RQVvMpA68A 00:14:46.061 16:10:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:46.061 16:10:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:46.061 16:10:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:46.061 16:10:45 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RQVvMpA68A' 00:14:46.061 16:10:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:46.061 16:10:45 -- target/tls.sh@28 -- # bdevperf_pid=3401123 00:14:46.061 16:10:45 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:46.061 16:10:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:46.061 16:10:45 -- target/tls.sh@31 -- # waitforlisten 3401123 /var/tmp/bdevperf.sock 00:14:46.061 16:10:45 -- common/autotest_common.sh@817 -- # '[' -z 3401123 ']' 00:14:46.061 16:10:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.061 16:10:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:46.061 16:10:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.061 16:10:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:46.061 16:10:45 -- common/autotest_common.sh@10 -- # set +x 00:14:46.061 [2024-04-24 16:10:45.268261] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:14:46.061 [2024-04-24 16:10:45.268336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401123 ] 00:14:46.061 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.061 [2024-04-24 16:10:45.324814] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.061 [2024-04-24 16:10:45.423402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.061 16:10:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:46.061 16:10:45 -- common/autotest_common.sh@850 -- # return 0 00:14:46.061 16:10:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RQVvMpA68A 00:14:46.061 [2024-04-24 16:10:45.745521] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.061 [2024-04-24 16:10:45.745627] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:46.061 TLSTESTn1 00:14:46.061 16:10:45 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:46.061 Running I/O for 10 seconds... 00:14:56.024 00:14:56.024 Latency(us) 00:14:56.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.024 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:56.024 Verification LBA range: start 0x0 length 0x2000 00:14:56.024 TLSTESTn1 : 10.02 1241.35 4.85 0.00 0.00 102932.70 2936.98 104080.88 00:14:56.024 =================================================================================================================== 00:14:56.024 Total : 1241.35 4.85 0.00 0.00 102932.70 2936.98 104080.88 00:14:56.024 0 00:14:56.024 16:10:55 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:56.024 16:10:55 -- target/tls.sh@45 -- # killprocess 3401123 00:14:56.024 16:10:55 -- common/autotest_common.sh@936 -- # '[' -z 3401123 ']' 00:14:56.024 16:10:55 -- common/autotest_common.sh@940 -- # kill -0 3401123 00:14:56.024 16:10:55 -- common/autotest_common.sh@941 -- # uname 00:14:56.024 16:10:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.024 16:10:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3401123 00:14:56.024 16:10:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:56.024 16:10:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:56.024 16:10:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3401123' 00:14:56.024 killing process with pid 3401123 00:14:56.024 16:10:56 -- common/autotest_common.sh@955 -- # kill 3401123 00:14:56.024 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.024 00:14:56.024 Latency(us) 00:14:56.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.024 =================================================================================================================== 00:14:56.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.024 [2024-04-24 16:10:56.011875] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:56.024 16:10:56 -- common/autotest_common.sh@960 -- # wait 3401123 00:14:56.024 16:10:56 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O4wjryNhWG 00:14:56.024 16:10:56 -- common/autotest_common.sh@638 -- # local es=0 00:14:56.024 16:10:56 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O4wjryNhWG 00:14:56.024 16:10:56 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:56.024 16:10:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.024 16:10:56 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:56.024 16:10:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.024 16:10:56 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O4wjryNhWG 00:14:56.024 16:10:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:56.024 16:10:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:56.024 16:10:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:56.024 16:10:56 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O4wjryNhWG' 00:14:56.024 16:10:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.024 16:10:56 -- target/tls.sh@28 -- # bdevperf_pid=3402330 00:14:56.024 16:10:56 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:56.024 16:10:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:56.024 16:10:56 -- target/tls.sh@31 -- # waitforlisten 3402330 /var/tmp/bdevperf.sock 00:14:56.024 16:10:56 -- common/autotest_common.sh@817 -- # '[' -z 3402330 ']' 00:14:56.024 16:10:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.024 16:10:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:56.024 16:10:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:56.024 16:10:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:56.024 16:10:56 -- common/autotest_common.sh@10 -- # set +x 00:14:56.024 [2024-04-24 16:10:56.319804] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:14:56.024 [2024-04-24 16:10:56.319896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402330 ] 00:14:56.024 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.024 [2024-04-24 16:10:56.378780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.024 [2024-04-24 16:10:56.480630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.024 16:10:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:56.024 16:10:56 -- common/autotest_common.sh@850 -- # return 0 00:14:56.024 16:10:56 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O4wjryNhWG 00:14:56.024 [2024-04-24 16:10:56.816436] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:56.024 [2024-04-24 16:10:56.816556] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:56.024 [2024-04-24 16:10:56.822680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:56.024 [2024-04-24 16:10:56.823296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe68230 (107): Transport endpoint is not connected 00:14:56.024 [2024-04-24 16:10:56.824287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe68230 (9): Bad file descriptor 00:14:56.024 [2024-04-24 16:10:56.825286] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:56.024 [2024-04-24 16:10:56.825307] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:56.024 [2024-04-24 16:10:56.825320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
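This negative case hands bdevperf the second key (/tmp/tmp.O4wjryNhWG) while the target only registered the first, so the server aborts the TLS handshake: the client's reads fail with errno 107, the qpair flush then hits a bad file descriptor, and the controller lands in a failed state. bdev_nvme_attach_controller surfaces that as the Invalid parameters error shown next. Issued by hand, the failing call is just (socket path, flags, and key file all from the log):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O4wjryNhWG
# expected to fail with JSON-RPC error -32602 (Invalid parameters), as in the response below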
00:14:56.024 request: 00:14:56.024 { 00:14:56.024 "name": "TLSTEST", 00:14:56.024 "trtype": "tcp", 00:14:56.024 "traddr": "10.0.0.2", 00:14:56.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.024 "adrfam": "ipv4", 00:14:56.024 "trsvcid": "4420", 00:14:56.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.024 "psk": "/tmp/tmp.O4wjryNhWG", 00:14:56.024 "method": "bdev_nvme_attach_controller", 00:14:56.024 "req_id": 1 00:14:56.024 } 00:14:56.024 Got JSON-RPC error response 00:14:56.024 response: 00:14:56.024 { 00:14:56.024 "code": -32602, 00:14:56.024 "message": "Invalid parameters" 00:14:56.024 } 00:14:56.024 16:10:56 -- target/tls.sh@36 -- # killprocess 3402330 00:14:56.024 16:10:56 -- common/autotest_common.sh@936 -- # '[' -z 3402330 ']' 00:14:56.024 16:10:56 -- common/autotest_common.sh@940 -- # kill -0 3402330 00:14:56.024 16:10:56 -- common/autotest_common.sh@941 -- # uname 00:14:56.024 16:10:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.024 16:10:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3402330 00:14:56.024 16:10:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:56.024 16:10:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:56.024 16:10:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3402330' 00:14:56.024 killing process with pid 3402330 00:14:56.024 16:10:56 -- common/autotest_common.sh@955 -- # kill 3402330 00:14:56.024 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.024 00:14:56.024 Latency(us) 00:14:56.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.024 =================================================================================================================== 00:14:56.024 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:56.024 [2024-04-24 16:10:56.876409] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:56.024 16:10:56 -- common/autotest_common.sh@960 -- # wait 3402330 00:14:56.024 16:10:57 -- target/tls.sh@37 -- # return 1 00:14:56.024 16:10:57 -- common/autotest_common.sh@641 -- # es=1 00:14:56.024 16:10:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:56.024 16:10:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:56.024 16:10:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:56.024 16:10:57 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RQVvMpA68A 00:14:56.024 16:10:57 -- common/autotest_common.sh@638 -- # local es=0 00:14:56.024 16:10:57 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RQVvMpA68A 00:14:56.024 16:10:57 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:56.024 16:10:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.024 16:10:57 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:56.024 16:10:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.024 16:10:57 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RQVvMpA68A 00:14:56.024 16:10:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:56.024 16:10:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:56.024 16:10:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
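The autotest_common.sh@638–@665 lines above are the NOT wrapper turning that expected failure into a passing assertion before the next case (host2) is set up below. A sketch reconstructed from exactly those traced lines (the optional branch probed at @660 is elided, and the signal handling at @649 is an assumption):

valid_exec_arg() {  # @626..@630: accept only things bash can actually run
    case "$(type -t "$1")" in
        function | builtin | file) ;;
        *) return 1 ;;
    esac
}

NOT() {
    local es=0                          # @638
    valid_exec_arg "$@" || return 1     # @640: refuse to wrap a non-command (assumed failure mode)
    "$@" || es=$?                       # @641: run the command, remember its exit status
    if ((es > 128)); then               # @649: >128 means killed by a signal
        return 1                        # assumption: a crash must not count as an expected failure
    fi
    ((!es == 0))                        # @665: succeed only if the wrapped command failed
}

Here NOT run_bdevperf ... /tmp/tmp.O4wjryNhWG passed precisely because run_bdevperf returned 1 (tls.sh@37).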
00:14:56.024 16:10:57 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RQVvMpA68A' 00:14:56.024 16:10:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.024 16:10:57 -- target/tls.sh@28 -- # bdevperf_pid=3402468 00:14:56.024 16:10:57 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:56.024 16:10:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:56.024 16:10:57 -- target/tls.sh@31 -- # waitforlisten 3402468 /var/tmp/bdevperf.sock 00:14:56.024 16:10:57 -- common/autotest_common.sh@817 -- # '[' -z 3402468 ']' 00:14:56.024 16:10:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.024 16:10:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:56.024 16:10:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:56.024 16:10:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:56.024 16:10:57 -- common/autotest_common.sh@10 -- # set +x 00:14:56.024 [2024-04-24 16:10:57.166568] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:14:56.024 [2024-04-24 16:10:57.166657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402468 ] 00:14:56.024 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.024 [2024-04-24 16:10:57.232370] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.281 [2024-04-24 16:10:57.339556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.281 16:10:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:56.281 16:10:57 -- common/autotest_common.sh@850 -- # return 0 00:14:56.281 16:10:57 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.RQVvMpA68A 00:14:56.540 [2024-04-24 16:10:57.720668] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:56.540 [2024-04-24 16:10:57.720823] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:56.540 [2024-04-24 16:10:57.726021] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:56.540 [2024-04-24 16:10:57.726057] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:56.540 [2024-04-24 16:10:57.726112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:56.540 [2024-04-24 16:10:57.726678] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120d230 (107): Transport endpoint is not connected 00:14:56.540 [2024-04-24 16:10:57.727668] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120d230 (9): Bad file descriptor 00:14:56.540 [2024-04-24 16:10:57.728667] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:56.540 [2024-04-24 16:10:57.728688] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:56.540 [2024-04-24 16:10:57.728702] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:56.540 request: 00:14:56.540 { 00:14:56.540 "name": "TLSTEST", 00:14:56.540 "trtype": "tcp", 00:14:56.540 "traddr": "10.0.0.2", 00:14:56.540 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:56.540 "adrfam": "ipv4", 00:14:56.540 "trsvcid": "4420", 00:14:56.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.540 "psk": "/tmp/tmp.RQVvMpA68A", 00:14:56.540 "method": "bdev_nvme_attach_controller", 00:14:56.540 "req_id": 1 00:14:56.540 } 00:14:56.540 Got JSON-RPC error response 00:14:56.540 response: 00:14:56.540 { 00:14:56.540 "code": -32602, 00:14:56.540 "message": "Invalid parameters" 00:14:56.540 } 00:14:56.540 16:10:57 -- target/tls.sh@36 -- # killprocess 3402468 00:14:56.540 16:10:57 -- common/autotest_common.sh@936 -- # '[' -z 3402468 ']' 00:14:56.540 16:10:57 -- common/autotest_common.sh@940 -- # kill -0 3402468 00:14:56.540 16:10:57 -- common/autotest_common.sh@941 -- # uname 00:14:56.540 16:10:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.540 16:10:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3402468 00:14:56.540 16:10:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:56.540 16:10:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:56.540 16:10:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3402468' 00:14:56.540 killing process with pid 3402468 00:14:56.540 16:10:57 -- common/autotest_common.sh@955 -- # kill 3402468 00:14:56.540 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.540 00:14:56.540 Latency(us) 00:14:56.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.540 =================================================================================================================== 00:14:56.540 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:56.540 [2024-04-24 16:10:57.782461] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:56.540 16:10:57 -- common/autotest_common.sh@960 -- # wait 3402468 00:14:56.798 16:10:58 -- target/tls.sh@37 -- # return 1 00:14:56.798 16:10:58 -- common/autotest_common.sh@641 -- # es=1 00:14:56.798 16:10:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:56.798 16:10:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:56.798 16:10:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:56.798 16:10:58 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RQVvMpA68A 00:14:56.798 16:10:58 -- common/autotest_common.sh@638 -- # local es=0 00:14:56.798 16:10:58 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RQVvMpA68A 00:14:56.798 16:10:58 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:56.798 16:10:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.798 16:10:58 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:56.798 16:10:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.798 16:10:58 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RQVvMpA68A 00:14:56.798 16:10:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:56.798 16:10:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:56.798 16:10:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:56.798 16:10:58 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RQVvMpA68A' 00:14:56.798 16:10:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.799 16:10:58 -- target/tls.sh@28 -- # bdevperf_pid=3402600 00:14:56.799 16:10:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:56.799 16:10:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:56.799 16:10:58 -- target/tls.sh@31 -- # waitforlisten 3402600 /var/tmp/bdevperf.sock 00:14:56.799 16:10:58 -- common/autotest_common.sh@817 -- # '[' -z 3402600 ']' 00:14:56.799 16:10:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.799 16:10:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:56.799 16:10:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:56.799 16:10:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:56.799 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:14:56.799 [2024-04-24 16:10:58.081320] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:14:56.799 [2024-04-24 16:10:58.081409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402600 ] 00:14:57.057 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.057 [2024-04-24 16:10:58.141359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.057 [2024-04-24 16:10:58.251372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.315 16:10:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:57.315 16:10:58 -- common/autotest_common.sh@850 -- # return 0 00:14:57.315 16:10:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RQVvMpA68A 00:14:57.574 [2024-04-24 16:10:58.637544] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:57.574 [2024-04-24 16:10:58.637656] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:57.574 [2024-04-24 16:10:58.647242] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:57.574 [2024-04-24 16:10:58.647277] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:57.574 [2024-04-24 16:10:58.647331] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:57.574 [2024-04-24 16:10:58.647484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2479230 (107): Transport endpoint is not connected 00:14:57.574 [2024-04-24 16:10:58.648474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2479230 (9): Bad file descriptor 00:14:57.574 [2024-04-24 16:10:58.649472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:57.574 [2024-04-24 16:10:58.649493] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:57.574 [2024-04-24 16:10:58.649505] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
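The two Could not find PSK for identity errors above are the target-side view of these negative cases: the TLS server looks retained PSKs up by an identity string that embeds both NQNs of the connection attempt, so the key registered for host1 on cnode1 is invisible to host2 or to cnode2. The shape of that identity, exactly as printed in the errors (the NVMe0R01 prefix is the NVMe/TLS identity header; its field encoding is not shown in this log):

hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$identity"   # -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
# no nvmf_subsystem_add_host entry maps to this pair, so the handshake is rejected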
00:14:57.574 request: 00:14:57.574 { 00:14:57.574 "name": "TLSTEST", 00:14:57.574 "trtype": "tcp", 00:14:57.574 "traddr": "10.0.0.2", 00:14:57.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:57.574 "adrfam": "ipv4", 00:14:57.574 "trsvcid": "4420", 00:14:57.574 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:57.574 "psk": "/tmp/tmp.RQVvMpA68A", 00:14:57.574 "method": "bdev_nvme_attach_controller", 00:14:57.574 "req_id": 1 00:14:57.574 } 00:14:57.574 Got JSON-RPC error response 00:14:57.574 response: 00:14:57.574 { 00:14:57.574 "code": -32602, 00:14:57.574 "message": "Invalid parameters" 00:14:57.574 } 00:14:57.574 16:10:58 -- target/tls.sh@36 -- # killprocess 3402600 00:14:57.574 16:10:58 -- common/autotest_common.sh@936 -- # '[' -z 3402600 ']' 00:14:57.574 16:10:58 -- common/autotest_common.sh@940 -- # kill -0 3402600 00:14:57.574 16:10:58 -- common/autotest_common.sh@941 -- # uname 00:14:57.574 16:10:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.574 16:10:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3402600 00:14:57.574 16:10:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:57.574 16:10:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:57.574 16:10:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3402600' 00:14:57.574 killing process with pid 3402600 00:14:57.574 16:10:58 -- common/autotest_common.sh@955 -- # kill 3402600 00:14:57.574 Received shutdown signal, test time was about 10.000000 seconds 00:14:57.574 00:14:57.574 Latency(us) 00:14:57.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.574 =================================================================================================================== 00:14:57.574 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:57.574 [2024-04-24 16:10:58.702583] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:57.574 16:10:58 -- common/autotest_common.sh@960 -- # wait 3402600 00:14:57.833 16:10:58 -- target/tls.sh@37 -- # return 1 00:14:57.833 16:10:58 -- common/autotest_common.sh@641 -- # es=1 00:14:57.833 16:10:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:57.833 16:10:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:57.833 16:10:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:57.833 16:10:58 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:57.833 16:10:58 -- common/autotest_common.sh@638 -- # local es=0 00:14:57.833 16:10:58 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:57.833 16:10:58 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:57.833 16:10:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:57.833 16:10:58 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:57.833 16:10:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:57.833 16:10:58 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:57.833 16:10:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:57.833 16:10:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:57.833 16:10:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:57.833 16:10:58 -- target/tls.sh@23 -- # psk= 
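The tls.sh@22–@25 lines at the end of the trace above are the preamble of run_bdevperf, the helper every case in this file goes through; this time psk= stays empty because no key argument was passed. Pieced together from the tls.sh@22..37 lines traced throughout this log (the glue and the failure path are assumptions; $rootdir stands for the jenkins workspace spdk checkout):

run_bdevperf() {
    local subnqn hostnqn psk                  # tls.sh@22
    subnqn=$1 hostnqn=$2                      # tls.sh@23
    psk=${3:+--psk $3}                        # tls.sh@23: '--psk <file>' or empty
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock  # tls.sh@25

    # tls.sh@27/@28: start bdevperf idle (-z) with the workload parameters baked in
    "$rootdir/build/examples/bdevperf" -m 0x4 -z -r "$bdevperf_rpc_sock" \
        -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    trap 'cleanup; exit 1' SIGINT SIGTERM EXIT           # tls.sh@30
    waitforlisten "$bdevperf_pid" "$bdevperf_rpc_sock"   # tls.sh@31

    # tls.sh@34: the attach is where a wrong or missing PSK makes the helper fail
    "$rootdir/scripts/rpc.py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n "$subnqn" -q "$hostnqn" $psk || {
        killprocess "$bdevperf_pid"           # tls.sh@36
        return 1                              # tls.sh@37
    }
    # on success the helper goes on to drive the workload; see the
    # perform_tests step shown after the next successful run below
}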
00:14:57.833 16:10:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:57.833 16:10:58 -- target/tls.sh@28 -- # bdevperf_pid=3402734 00:14:57.833 16:10:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:57.833 16:10:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:57.833 16:10:58 -- target/tls.sh@31 -- # waitforlisten 3402734 /var/tmp/bdevperf.sock 00:14:57.833 16:10:58 -- common/autotest_common.sh@817 -- # '[' -z 3402734 ']' 00:14:57.833 16:10:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.833 16:10:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:57.833 16:10:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:57.833 16:10:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:57.833 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:14:57.833 [2024-04-24 16:10:58.991781] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:14:57.833 [2024-04-24 16:10:58.991871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402734 ] 00:14:57.833 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.833 [2024-04-24 16:10:59.048534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.092 [2024-04-24 16:10:59.145566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.092 16:10:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:58.092 16:10:59 -- common/autotest_common.sh@850 -- # return 0 00:14:58.092 16:10:59 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:58.350 [2024-04-24 16:10:59.479578] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:58.350 [2024-04-24 16:10:59.481409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e4ba0 (9): Bad file descriptor 00:14:58.350 [2024-04-24 16:10:59.482404] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:58.350 [2024-04-24 16:10:59.482425] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:58.350 [2024-04-24 16:10:59.482437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
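With no --psk at all the attach above cannot offer any key, yet the listener demands a secure channel: setup_nvmf_tgt created it with -k (the flag behind the TLS support is considered experimental notices), so the plain connection is dropped and the Invalid parameters response follows below. The whole target-side setup, collected from the tls.sh@49–@58 traces earlier in this log ($rpc stands for the workspace scripts/rpc.py):

setup_nvmf_tgt() {
    local key=$1                                                        # tls.sh@49
    $rpc nvmf_create_transport -t tcp -o                                # tls.sh@51
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10                                     # tls.sh@52
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                                   # tls.sh@53: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0                          # tls.sh@55
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # tls.sh@56
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key"                          # tls.sh@58: register the PSK
}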
00:14:58.350 request: 00:14:58.350 { 00:14:58.350 "name": "TLSTEST", 00:14:58.350 "trtype": "tcp", 00:14:58.350 "traddr": "10.0.0.2", 00:14:58.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:58.350 "adrfam": "ipv4", 00:14:58.350 "trsvcid": "4420", 00:14:58.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.350 "method": "bdev_nvme_attach_controller", 00:14:58.350 "req_id": 1 00:14:58.350 } 00:14:58.350 Got JSON-RPC error response 00:14:58.350 response: 00:14:58.350 { 00:14:58.350 "code": -32602, 00:14:58.350 "message": "Invalid parameters" 00:14:58.350 } 00:14:58.350 16:10:59 -- target/tls.sh@36 -- # killprocess 3402734 00:14:58.350 16:10:59 -- common/autotest_common.sh@936 -- # '[' -z 3402734 ']' 00:14:58.350 16:10:59 -- common/autotest_common.sh@940 -- # kill -0 3402734 00:14:58.350 16:10:59 -- common/autotest_common.sh@941 -- # uname 00:14:58.350 16:10:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.350 16:10:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3402734 00:14:58.350 16:10:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:58.350 16:10:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:58.350 16:10:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3402734' 00:14:58.350 killing process with pid 3402734 00:14:58.350 16:10:59 -- common/autotest_common.sh@955 -- # kill 3402734 00:14:58.350 Received shutdown signal, test time was about 10.000000 seconds 00:14:58.350 00:14:58.350 Latency(us) 00:14:58.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.350 =================================================================================================================== 00:14:58.350 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:58.350 16:10:59 -- common/autotest_common.sh@960 -- # wait 3402734 00:14:58.609 16:10:59 -- target/tls.sh@37 -- # return 1 00:14:58.609 16:10:59 -- common/autotest_common.sh@641 -- # es=1 00:14:58.609 16:10:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:58.609 16:10:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:58.609 16:10:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:58.609 16:10:59 -- target/tls.sh@158 -- # killprocess 3399105 00:14:58.609 16:10:59 -- common/autotest_common.sh@936 -- # '[' -z 3399105 ']' 00:14:58.609 16:10:59 -- common/autotest_common.sh@940 -- # kill -0 3399105 00:14:58.609 16:10:59 -- common/autotest_common.sh@941 -- # uname 00:14:58.609 16:10:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.609 16:10:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3399105 00:14:58.609 16:10:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:58.609 16:10:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:58.609 16:10:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3399105' 00:14:58.609 killing process with pid 3399105 00:14:58.609 16:10:59 -- common/autotest_common.sh@955 -- # kill 3399105 00:14:58.609 [2024-04-24 16:10:59.817148] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:58.609 16:10:59 -- common/autotest_common.sh@960 -- # wait 3399105 00:14:58.867 16:11:00 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:58.867 16:11:00 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:14:58.867 16:11:00 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:58.867 16:11:00 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:58.867 16:11:00 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:58.867 16:11:00 -- nvmf/common.sh@693 -- # digest=2 00:14:58.867 16:11:00 -- nvmf/common.sh@694 -- # python - 00:14:59.126 16:11:00 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:59.126 16:11:00 -- target/tls.sh@160 -- # mktemp 00:14:59.126 16:11:00 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.FeaQxlhqyw 00:14:59.126 16:11:00 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:59.126 16:11:00 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.FeaQxlhqyw 00:14:59.126 16:11:00 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:59.126 16:11:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:59.126 16:11:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:59.126 16:11:00 -- common/autotest_common.sh@10 -- # set +x 00:14:59.126 16:11:00 -- nvmf/common.sh@470 -- # nvmfpid=3402893 00:14:59.126 16:11:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:59.126 16:11:00 -- nvmf/common.sh@471 -- # waitforlisten 3402893 00:14:59.126 16:11:00 -- common/autotest_common.sh@817 -- # '[' -z 3402893 ']' 00:14:59.126 16:11:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.126 16:11:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:59.126 16:11:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.126 16:11:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:59.126 16:11:00 -- common/autotest_common.sh@10 -- # set +x 00:14:59.126 [2024-04-24 16:11:00.224868] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:14:59.126 [2024-04-24 16:11:00.224979] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.126 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.126 [2024-04-24 16:11:00.294695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.126 [2024-04-24 16:11:00.404409] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.126 [2024-04-24 16:11:00.404479] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.126 [2024-04-24 16:11:00.404506] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.126 [2024-04-24 16:11:00.404521] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.126 [2024-04-24 16:11:00.404534] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
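Both interchange keys in this log — the NVMeTLSkey-1:01:...: pair formatted near the top of the file and the NVMeTLSkey-1:02:...: long key just above — come out of format_interchange_psk, whose traced variables (nvmf/common.sh@691–@694) feed an inline python snippet. A reconstruction consistent with those traces and with the printed keys, whose base64 payloads decode to the ASCII key followed by four extra bytes (the CRC32 framing is an assumption; only the variable names and the NVMeTLSkey-1:<digest>: framing are visible in the log):

format_key() {                        # nvmf/common.sh@691..@694
    local prefix key digest
    prefix=$1 key=$2 digest=$3
    python - << EOF
import base64, zlib
key = b"$key"
# assumed payload: the raw key characters plus a little-endian CRC32 of them
crc = zlib.crc32(key).to_bytes(4, "little")
print("$prefix:{:02d}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}

format_interchange_psk() {            # nvmf/common.sh@704
    format_key NVMeTLSkey-1 "$1" "$2"
}

format_interchange_psk 00112233445566778899aabbccddeeff 1 reproduces the :01: key echoed into /tmp/tmp.RQVvMpA68A earlier, and digest 2 yields the :02: variant written to /tmp/tmp.FeaQxlhqyw above.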
00:14:59.126 [2024-04-24 16:11:00.404581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.384 16:11:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:59.384 16:11:00 -- common/autotest_common.sh@850 -- # return 0 00:14:59.384 16:11:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:59.384 16:11:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:59.384 16:11:00 -- common/autotest_common.sh@10 -- # set +x 00:14:59.384 16:11:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.384 16:11:00 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.FeaQxlhqyw 00:14:59.384 16:11:00 -- target/tls.sh@49 -- # local key=/tmp/tmp.FeaQxlhqyw 00:14:59.384 16:11:00 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:59.641 [2024-04-24 16:11:00.764900] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.641 16:11:00 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:59.899 16:11:01 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:00.156 [2024-04-24 16:11:01.254227] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:00.157 [2024-04-24 16:11:01.254500] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.157 16:11:01 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:00.418 malloc0 00:15:00.418 16:11:01 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:00.694 16:11:01 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FeaQxlhqyw 00:15:00.965 [2024-04-24 16:11:02.027906] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:00.965 16:11:02 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FeaQxlhqyw 00:15:00.965 16:11:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:00.965 16:11:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.965 16:11:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:00.965 16:11:02 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FeaQxlhqyw' 00:15:00.965 16:11:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.965 16:11:02 -- target/tls.sh@28 -- # bdevperf_pid=3403056 00:15:00.965 16:11:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.965 16:11:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.965 16:11:02 -- target/tls.sh@31 -- # waitforlisten 3403056 /var/tmp/bdevperf.sock 00:15:00.965 16:11:02 -- common/autotest_common.sh@817 -- # '[' -z 3403056 ']' 00:15:00.965 16:11:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.965 16:11:02 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.965 16:11:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.965 16:11:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.965 16:11:02 -- common/autotest_common.sh@10 -- # set +x 00:15:00.965 [2024-04-24 16:11:02.079818] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:00.965 [2024-04-24 16:11:02.079886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403056 ] 00:15:00.965 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.965 [2024-04-24 16:11:02.137540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.965 [2024-04-24 16:11:02.240870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.224 16:11:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:01.224 16:11:02 -- common/autotest_common.sh@850 -- # return 0 00:15:01.224 16:11:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FeaQxlhqyw 00:15:01.482 [2024-04-24 16:11:02.588937] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.482 [2024-04-24 16:11:02.589058] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:01.482 TLSTESTn1 00:15:01.482 16:11:02 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:01.742 Running I/O for 10 seconds... 
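TLSTESTn1 above is the bdev created by the successful attach with the long key; the 10-second verify run it announces is driven from outside over the bdevperf RPC socket, and its results table follows below. The driving step, with the exact flags from this log:

# bdevperf is already running idle (-z) with the workload parameters baked in:
#   bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
# perform_tests kicks off that preconfigured job and waits up to 20s for it:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests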
00:15:11.703 00:15:11.703 Latency(us) 00:15:11.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.703 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:11.703 Verification LBA range: start 0x0 length 0x2000 00:15:11.703 TLSTESTn1 : 10.04 2858.16 11.16 0.00 0.00 44673.52 7524.50 87769.69 00:15:11.703 =================================================================================================================== 00:15:11.703 Total : 2858.16 11.16 0.00 0.00 44673.52 7524.50 87769.69 00:15:11.703 0 00:15:11.703 16:11:12 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:11.703 16:11:12 -- target/tls.sh@45 -- # killprocess 3403056 00:15:11.703 16:11:12 -- common/autotest_common.sh@936 -- # '[' -z 3403056 ']' 00:15:11.703 16:11:12 -- common/autotest_common.sh@940 -- # kill -0 3403056 00:15:11.703 16:11:12 -- common/autotest_common.sh@941 -- # uname 00:15:11.703 16:11:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:11.703 16:11:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3403056 00:15:11.703 16:11:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:11.703 16:11:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:11.703 16:11:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3403056' 00:15:11.703 killing process with pid 3403056 00:15:11.703 16:11:12 -- common/autotest_common.sh@955 -- # kill 3403056 00:15:11.703 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.703 00:15:11.703 Latency(us) 00:15:11.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.703 =================================================================================================================== 00:15:11.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.703 [2024-04-24 16:11:12.893477] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:11.703 16:11:12 -- common/autotest_common.sh@960 -- # wait 3403056 00:15:11.960 16:11:13 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.FeaQxlhqyw 00:15:11.960 16:11:13 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FeaQxlhqyw 00:15:11.960 16:11:13 -- common/autotest_common.sh@638 -- # local es=0 00:15:11.960 16:11:13 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FeaQxlhqyw 00:15:11.960 16:11:13 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:11.960 16:11:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.960 16:11:13 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:11.960 16:11:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.960 16:11:13 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FeaQxlhqyw 00:15:11.960 16:11:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:11.960 16:11:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:11.960 16:11:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:11.960 16:11:13 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FeaQxlhqyw' 00:15:11.960 16:11:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:11.960 16:11:13 -- target/tls.sh@28 -- # 
bdevperf_pid=3404376 00:15:11.960 16:11:13 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:11.960 16:11:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.960 16:11:13 -- target/tls.sh@31 -- # waitforlisten 3404376 /var/tmp/bdevperf.sock 00:15:11.960 16:11:13 -- common/autotest_common.sh@817 -- # '[' -z 3404376 ']' 00:15:11.960 16:11:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.960 16:11:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:11.960 16:11:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.960 16:11:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:11.960 16:11:13 -- common/autotest_common.sh@10 -- # set +x 00:15:11.960 [2024-04-24 16:11:13.198805] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:11.960 [2024-04-24 16:11:13.198893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404376 ] 00:15:11.960 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.218 [2024-04-24 16:11:13.266270] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.218 [2024-04-24 16:11:13.373604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.218 16:11:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.218 16:11:13 -- common/autotest_common.sh@850 -- # return 0 00:15:12.218 16:11:13 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FeaQxlhqyw 00:15:12.475 [2024-04-24 16:11:13.709253] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:12.475 [2024-04-24 16:11:13.709327] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:12.475 [2024-04-24 16:11:13.709342] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.FeaQxlhqyw 00:15:12.475 request: 00:15:12.475 { 00:15:12.475 "name": "TLSTEST", 00:15:12.475 "trtype": "tcp", 00:15:12.475 "traddr": "10.0.0.2", 00:15:12.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.475 "adrfam": "ipv4", 00:15:12.475 "trsvcid": "4420", 00:15:12.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.475 "psk": "/tmp/tmp.FeaQxlhqyw", 00:15:12.475 "method": "bdev_nvme_attach_controller", 00:15:12.475 "req_id": 1 00:15:12.475 } 00:15:12.475 Got JSON-RPC error response 00:15:12.475 response: 00:15:12.475 { 00:15:12.475 "code": -1, 00:15:12.475 "message": "Operation not permitted" 00:15:12.475 } 00:15:12.475 16:11:13 -- target/tls.sh@36 -- # killprocess 3404376 00:15:12.475 16:11:13 -- common/autotest_common.sh@936 -- # '[' -z 3404376 ']' 00:15:12.475 16:11:13 -- common/autotest_common.sh@940 -- # kill -0 3404376 00:15:12.475 16:11:13 -- common/autotest_common.sh@941 -- # uname 00:15:12.475 16:11:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.475 
16:11:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3404376 00:15:12.475 16:11:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:12.475 16:11:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:12.475 16:11:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3404376' 00:15:12.475 killing process with pid 3404376 00:15:12.475 16:11:13 -- common/autotest_common.sh@955 -- # kill 3404376 00:15:12.475 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.475 00:15:12.475 Latency(us) 00:15:12.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.475 =================================================================================================================== 00:15:12.475 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:12.475 16:11:13 -- common/autotest_common.sh@960 -- # wait 3404376 00:15:12.733 16:11:13 -- target/tls.sh@37 -- # return 1 00:15:12.733 16:11:13 -- common/autotest_common.sh@641 -- # es=1 00:15:12.733 16:11:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:12.733 16:11:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:12.733 16:11:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:12.733 16:11:13 -- target/tls.sh@174 -- # killprocess 3402893 00:15:12.733 16:11:13 -- common/autotest_common.sh@936 -- # '[' -z 3402893 ']' 00:15:12.733 16:11:13 -- common/autotest_common.sh@940 -- # kill -0 3402893 00:15:12.733 16:11:13 -- common/autotest_common.sh@941 -- # uname 00:15:12.733 16:11:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.733 16:11:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3402893 00:15:12.991 16:11:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:12.991 16:11:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:12.991 16:11:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3402893' 00:15:12.991 killing process with pid 3402893 00:15:12.991 16:11:14 -- common/autotest_common.sh@955 -- # kill 3402893 00:15:12.991 [2024-04-24 16:11:14.023137] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:12.991 16:11:14 -- common/autotest_common.sh@960 -- # wait 3402893 00:15:13.249 16:11:14 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:13.249 16:11:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:13.249 16:11:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:13.249 16:11:14 -- common/autotest_common.sh@10 -- # set +x 00:15:13.249 16:11:14 -- nvmf/common.sh@470 -- # nvmfpid=3404522 00:15:13.249 16:11:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.249 16:11:14 -- nvmf/common.sh@471 -- # waitforlisten 3404522 00:15:13.249 16:11:14 -- common/autotest_common.sh@817 -- # '[' -z 3404522 ']' 00:15:13.249 16:11:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.249 16:11:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.249 16:11:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:13.249 16:11:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.249 16:11:14 -- common/autotest_common.sh@10 -- # set +x 00:15:13.249 [2024-04-24 16:11:14.362408] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:13.249 [2024-04-24 16:11:14.362488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.249 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.249 [2024-04-24 16:11:14.426313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.249 [2024-04-24 16:11:14.531007] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.249 [2024-04-24 16:11:14.531081] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.249 [2024-04-24 16:11:14.531095] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.249 [2024-04-24 16:11:14.531107] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.249 [2024-04-24 16:11:14.531117] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.249 [2024-04-24 16:11:14.531150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.508 16:11:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:13.508 16:11:14 -- common/autotest_common.sh@850 -- # return 0 00:15:13.508 16:11:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:13.508 16:11:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:13.508 16:11:14 -- common/autotest_common.sh@10 -- # set +x 00:15:13.508 16:11:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.508 16:11:14 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.FeaQxlhqyw 00:15:13.508 16:11:14 -- common/autotest_common.sh@638 -- # local es=0 00:15:13.508 16:11:14 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.FeaQxlhqyw 00:15:13.508 16:11:14 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:15:13.508 16:11:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.508 16:11:14 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:15:13.508 16:11:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.508 16:11:14 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.FeaQxlhqyw 00:15:13.508 16:11:14 -- target/tls.sh@49 -- # local key=/tmp/tmp.FeaQxlhqyw 00:15:13.508 16:11:14 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:13.765 [2024-04-24 16:11:14.929487] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.765 16:11:14 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:14.023 16:11:15 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:14.281 [2024-04-24 16:11:15.478965] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:14.281 [2024-04-24 16:11:15.479230] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.281 16:11:15 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:14.539 malloc0 00:15:14.539 16:11:15 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:14.797 16:11:16 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FeaQxlhqyw 00:15:15.053 [2024-04-24 16:11:16.284771] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:15.053 [2024-04-24 16:11:16.284812] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:15.053 [2024-04-24 16:11:16.284842] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:15:15.053 request: 00:15:15.053 { 00:15:15.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.053 "host": "nqn.2016-06.io.spdk:host1", 00:15:15.053 "psk": "/tmp/tmp.FeaQxlhqyw", 00:15:15.053 "method": "nvmf_subsystem_add_host", 00:15:15.053 "req_id": 1 00:15:15.053 } 00:15:15.053 Got JSON-RPC error response 00:15:15.053 response: 00:15:15.053 { 00:15:15.053 "code": -32603, 00:15:15.053 "message": "Internal error" 00:15:15.053 } 00:15:15.053 16:11:16 -- common/autotest_common.sh@641 -- # es=1 00:15:15.053 16:11:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:15.053 16:11:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:15.053 16:11:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:15.053 16:11:16 -- target/tls.sh@180 -- # killprocess 3404522 00:15:15.053 16:11:16 -- common/autotest_common.sh@936 -- # '[' -z 3404522 ']' 00:15:15.053 16:11:16 -- common/autotest_common.sh@940 -- # kill -0 3404522 00:15:15.053 16:11:16 -- common/autotest_common.sh@941 -- # uname 00:15:15.053 16:11:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.053 16:11:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3404522 00:15:15.053 16:11:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:15.053 16:11:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:15.053 16:11:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3404522' 00:15:15.053 killing process with pid 3404522 00:15:15.053 16:11:16 -- common/autotest_common.sh@955 -- # kill 3404522 00:15:15.053 16:11:16 -- common/autotest_common.sh@960 -- # wait 3404522 00:15:15.618 16:11:16 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.FeaQxlhqyw 00:15:15.618 16:11:16 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:15.618 16:11:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:15.618 16:11:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:15.618 16:11:16 -- common/autotest_common.sh@10 -- # set +x 00:15:15.618 16:11:16 -- nvmf/common.sh@470 -- # nvmfpid=3404826 00:15:15.618 16:11:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:15.618 16:11:16 -- nvmf/common.sh@471 -- # waitforlisten 3404826 00:15:15.618 16:11:16 -- common/autotest_common.sh@817 -- # '[' -z 3404826 ']' 00:15:15.618 16:11:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.618 16:11:16 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:15:15.618 16:11:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.618 16:11:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:15.618 16:11:16 -- common/autotest_common.sh@10 -- # set +x 00:15:15.618 [2024-04-24 16:11:16.661396] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:15.618 [2024-04-24 16:11:16.661486] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.618 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.618 [2024-04-24 16:11:16.728120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.618 [2024-04-24 16:11:16.839009] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.618 [2024-04-24 16:11:16.839100] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.618 [2024-04-24 16:11:16.839116] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.618 [2024-04-24 16:11:16.839140] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.618 [2024-04-24 16:11:16.839160] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.618 [2024-04-24 16:11:16.839196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.876 16:11:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:15.876 16:11:16 -- common/autotest_common.sh@850 -- # return 0 00:15:15.876 16:11:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:15.876 16:11:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:15.876 16:11:16 -- common/autotest_common.sh@10 -- # set +x 00:15:15.876 16:11:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.876 16:11:16 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.FeaQxlhqyw 00:15:15.876 16:11:16 -- target/tls.sh@49 -- # local key=/tmp/tmp.FeaQxlhqyw 00:15:15.876 16:11:16 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:16.134 [2024-04-24 16:11:17.198794] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.134 16:11:17 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:16.391 16:11:17 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:16.649 [2024-04-24 16:11:17.708110] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:16.649 [2024-04-24 16:11:17.708375] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.649 16:11:17 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:16.907 malloc0 00:15:16.907 16:11:17 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:17.165 16:11:18 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FeaQxlhqyw 00:15:17.423 [2024-04-24 16:11:18.461807] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:17.423 16:11:18 -- target/tls.sh@188 -- # bdevperf_pid=3405107 00:15:17.423 16:11:18 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:17.423 16:11:18 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.423 16:11:18 -- target/tls.sh@191 -- # waitforlisten 3405107 /var/tmp/bdevperf.sock 00:15:17.423 16:11:18 -- common/autotest_common.sh@817 -- # '[' -z 3405107 ']' 00:15:17.423 16:11:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.423 16:11:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.423 16:11:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.423 16:11:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.423 16:11:18 -- common/autotest_common.sh@10 -- # set +x 00:15:17.423 [2024-04-24 16:11:18.522960] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:17.423 [2024-04-24 16:11:18.523043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405107 ] 00:15:17.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.423 [2024-04-24 16:11:18.580031] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.423 [2024-04-24 16:11:18.678120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.681 16:11:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.681 16:11:18 -- common/autotest_common.sh@850 -- # return 0 00:15:17.681 16:11:18 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FeaQxlhqyw 00:15:17.937 [2024-04-24 16:11:19.004902] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.937 [2024-04-24 16:11:19.005025] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:17.937 TLSTESTn1 00:15:17.937 16:11:19 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:15:18.195 16:11:19 -- target/tls.sh@196 -- # tgtconf='{ 00:15:18.195 "subsystems": [ 00:15:18.195 { 00:15:18.195 "subsystem": "keyring", 00:15:18.195 "config": [] 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "subsystem": "iobuf", 00:15:18.195 "config": [ 00:15:18.195 { 00:15:18.195 "method": "iobuf_set_options", 00:15:18.195 "params": { 00:15:18.195 
"small_pool_count": 8192, 00:15:18.195 "large_pool_count": 1024, 00:15:18.195 "small_bufsize": 8192, 00:15:18.195 "large_bufsize": 135168 00:15:18.195 } 00:15:18.195 } 00:15:18.195 ] 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "subsystem": "sock", 00:15:18.195 "config": [ 00:15:18.195 { 00:15:18.195 "method": "sock_impl_set_options", 00:15:18.195 "params": { 00:15:18.195 "impl_name": "posix", 00:15:18.195 "recv_buf_size": 2097152, 00:15:18.195 "send_buf_size": 2097152, 00:15:18.195 "enable_recv_pipe": true, 00:15:18.195 "enable_quickack": false, 00:15:18.195 "enable_placement_id": 0, 00:15:18.195 "enable_zerocopy_send_server": true, 00:15:18.195 "enable_zerocopy_send_client": false, 00:15:18.195 "zerocopy_threshold": 0, 00:15:18.195 "tls_version": 0, 00:15:18.195 "enable_ktls": false 00:15:18.195 } 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "method": "sock_impl_set_options", 00:15:18.195 "params": { 00:15:18.195 "impl_name": "ssl", 00:15:18.195 "recv_buf_size": 4096, 00:15:18.195 "send_buf_size": 4096, 00:15:18.195 "enable_recv_pipe": true, 00:15:18.195 "enable_quickack": false, 00:15:18.195 "enable_placement_id": 0, 00:15:18.195 "enable_zerocopy_send_server": true, 00:15:18.195 "enable_zerocopy_send_client": false, 00:15:18.195 "zerocopy_threshold": 0, 00:15:18.195 "tls_version": 0, 00:15:18.195 "enable_ktls": false 00:15:18.195 } 00:15:18.195 } 00:15:18.195 ] 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "subsystem": "vmd", 00:15:18.195 "config": [] 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "subsystem": "accel", 00:15:18.195 "config": [ 00:15:18.195 { 00:15:18.195 "method": "accel_set_options", 00:15:18.195 "params": { 00:15:18.195 "small_cache_size": 128, 00:15:18.195 "large_cache_size": 16, 00:15:18.195 "task_count": 2048, 00:15:18.195 "sequence_count": 2048, 00:15:18.195 "buf_count": 2048 00:15:18.195 } 00:15:18.195 } 00:15:18.195 ] 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "subsystem": "bdev", 00:15:18.195 "config": [ 00:15:18.195 { 00:15:18.195 "method": "bdev_set_options", 00:15:18.195 "params": { 00:15:18.195 "bdev_io_pool_size": 65535, 00:15:18.195 "bdev_io_cache_size": 256, 00:15:18.195 "bdev_auto_examine": true, 00:15:18.195 "iobuf_small_cache_size": 128, 00:15:18.195 "iobuf_large_cache_size": 16 00:15:18.195 } 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "method": "bdev_raid_set_options", 00:15:18.195 "params": { 00:15:18.195 "process_window_size_kb": 1024 00:15:18.195 } 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "method": "bdev_iscsi_set_options", 00:15:18.195 "params": { 00:15:18.195 "timeout_sec": 30 00:15:18.195 } 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "method": "bdev_nvme_set_options", 00:15:18.195 "params": { 00:15:18.195 "action_on_timeout": "none", 00:15:18.195 "timeout_us": 0, 00:15:18.195 "timeout_admin_us": 0, 00:15:18.195 "keep_alive_timeout_ms": 10000, 00:15:18.195 "arbitration_burst": 0, 00:15:18.195 "low_priority_weight": 0, 00:15:18.195 "medium_priority_weight": 0, 00:15:18.195 "high_priority_weight": 0, 00:15:18.195 "nvme_adminq_poll_period_us": 10000, 00:15:18.195 "nvme_ioq_poll_period_us": 0, 00:15:18.195 "io_queue_requests": 0, 00:15:18.195 "delay_cmd_submit": true, 00:15:18.195 "transport_retry_count": 4, 00:15:18.195 "bdev_retry_count": 3, 00:15:18.195 "transport_ack_timeout": 0, 00:15:18.195 "ctrlr_loss_timeout_sec": 0, 00:15:18.195 "reconnect_delay_sec": 0, 00:15:18.195 "fast_io_fail_timeout_sec": 0, 00:15:18.195 "disable_auto_failback": false, 00:15:18.195 "generate_uuids": false, 00:15:18.195 "transport_tos": 0, 00:15:18.195 "nvme_error_stat": 
false, 00:15:18.195 "rdma_srq_size": 0, 00:15:18.195 "io_path_stat": false, 00:15:18.195 "allow_accel_sequence": false, 00:15:18.195 "rdma_max_cq_size": 0, 00:15:18.195 "rdma_cm_event_timeout_ms": 0, 00:15:18.195 "dhchap_digests": [ 00:15:18.195 "sha256", 00:15:18.195 "sha384", 00:15:18.195 "sha512" 00:15:18.195 ], 00:15:18.195 "dhchap_dhgroups": [ 00:15:18.195 "null", 00:15:18.195 "ffdhe2048", 00:15:18.195 "ffdhe3072", 00:15:18.195 "ffdhe4096", 00:15:18.195 "ffdhe6144", 00:15:18.195 "ffdhe8192" 00:15:18.195 ] 00:15:18.195 } 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "method": "bdev_nvme_set_hotplug", 00:15:18.195 "params": { 00:15:18.195 "period_us": 100000, 00:15:18.195 "enable": false 00:15:18.195 } 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "method": "bdev_malloc_create", 00:15:18.195 "params": { 00:15:18.195 "name": "malloc0", 00:15:18.195 "num_blocks": 8192, 00:15:18.195 "block_size": 4096, 00:15:18.195 "physical_block_size": 4096, 00:15:18.195 "uuid": "0ef7c7fb-0b2d-4639-af0e-cd4e414ec035", 00:15:18.195 "optimal_io_boundary": 0 00:15:18.195 } 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "method": "bdev_wait_for_examine" 00:15:18.195 } 00:15:18.195 ] 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "subsystem": "nbd", 00:15:18.195 "config": [] 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "subsystem": "scheduler", 00:15:18.195 "config": [ 00:15:18.195 { 00:15:18.195 "method": "framework_set_scheduler", 00:15:18.195 "params": { 00:15:18.195 "name": "static" 00:15:18.195 } 00:15:18.195 } 00:15:18.195 ] 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "subsystem": "nvmf", 00:15:18.195 "config": [ 00:15:18.195 { 00:15:18.195 "method": "nvmf_set_config", 00:15:18.195 "params": { 00:15:18.195 "discovery_filter": "match_any", 00:15:18.195 "admin_cmd_passthru": { 00:15:18.195 "identify_ctrlr": false 00:15:18.195 } 00:15:18.195 } 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "method": "nvmf_set_max_subsystems", 00:15:18.195 "params": { 00:15:18.195 "max_subsystems": 1024 00:15:18.195 } 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "method": "nvmf_set_crdt", 00:15:18.195 "params": { 00:15:18.195 "crdt1": 0, 00:15:18.195 "crdt2": 0, 00:15:18.196 "crdt3": 0 00:15:18.196 } 00:15:18.196 }, 00:15:18.196 { 00:15:18.196 "method": "nvmf_create_transport", 00:15:18.196 "params": { 00:15:18.196 "trtype": "TCP", 00:15:18.196 "max_queue_depth": 128, 00:15:18.196 "max_io_qpairs_per_ctrlr": 127, 00:15:18.196 "in_capsule_data_size": 4096, 00:15:18.196 "max_io_size": 131072, 00:15:18.196 "io_unit_size": 131072, 00:15:18.196 "max_aq_depth": 128, 00:15:18.196 "num_shared_buffers": 511, 00:15:18.196 "buf_cache_size": 4294967295, 00:15:18.196 "dif_insert_or_strip": false, 00:15:18.196 "zcopy": false, 00:15:18.196 "c2h_success": false, 00:15:18.196 "sock_priority": 0, 00:15:18.196 "abort_timeout_sec": 1, 00:15:18.196 "ack_timeout": 0, 00:15:18.196 "data_wr_pool_size": 0 00:15:18.196 } 00:15:18.196 }, 00:15:18.196 { 00:15:18.196 "method": "nvmf_create_subsystem", 00:15:18.196 "params": { 00:15:18.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.196 "allow_any_host": false, 00:15:18.196 "serial_number": "SPDK00000000000001", 00:15:18.196 "model_number": "SPDK bdev Controller", 00:15:18.196 "max_namespaces": 10, 00:15:18.196 "min_cntlid": 1, 00:15:18.196 "max_cntlid": 65519, 00:15:18.196 "ana_reporting": false 00:15:18.196 } 00:15:18.196 }, 00:15:18.196 { 00:15:18.196 "method": "nvmf_subsystem_add_host", 00:15:18.196 "params": { 00:15:18.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.196 "host": "nqn.2016-06.io.spdk:host1", 
00:15:18.196 "psk": "/tmp/tmp.FeaQxlhqyw" 00:15:18.196 } 00:15:18.196 }, 00:15:18.196 { 00:15:18.196 "method": "nvmf_subsystem_add_ns", 00:15:18.196 "params": { 00:15:18.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.196 "namespace": { 00:15:18.196 "nsid": 1, 00:15:18.196 "bdev_name": "malloc0", 00:15:18.196 "nguid": "0EF7C7FB0B2D4639AF0ECD4E414EC035", 00:15:18.196 "uuid": "0ef7c7fb-0b2d-4639-af0e-cd4e414ec035", 00:15:18.196 "no_auto_visible": false 00:15:18.196 } 00:15:18.196 } 00:15:18.196 }, 00:15:18.196 { 00:15:18.196 "method": "nvmf_subsystem_add_listener", 00:15:18.196 "params": { 00:15:18.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.196 "listen_address": { 00:15:18.196 "trtype": "TCP", 00:15:18.196 "adrfam": "IPv4", 00:15:18.196 "traddr": "10.0.0.2", 00:15:18.196 "trsvcid": "4420" 00:15:18.196 }, 00:15:18.196 "secure_channel": true 00:15:18.196 } 00:15:18.196 } 00:15:18.196 ] 00:15:18.196 } 00:15:18.196 ] 00:15:18.196 }' 00:15:18.196 16:11:19 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:18.761 16:11:19 -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:18.761 "subsystems": [ 00:15:18.761 { 00:15:18.761 "subsystem": "keyring", 00:15:18.761 "config": [] 00:15:18.761 }, 00:15:18.761 { 00:15:18.761 "subsystem": "iobuf", 00:15:18.761 "config": [ 00:15:18.761 { 00:15:18.761 "method": "iobuf_set_options", 00:15:18.761 "params": { 00:15:18.761 "small_pool_count": 8192, 00:15:18.761 "large_pool_count": 1024, 00:15:18.761 "small_bufsize": 8192, 00:15:18.761 "large_bufsize": 135168 00:15:18.761 } 00:15:18.761 } 00:15:18.761 ] 00:15:18.761 }, 00:15:18.761 { 00:15:18.761 "subsystem": "sock", 00:15:18.761 "config": [ 00:15:18.761 { 00:15:18.761 "method": "sock_impl_set_options", 00:15:18.761 "params": { 00:15:18.761 "impl_name": "posix", 00:15:18.761 "recv_buf_size": 2097152, 00:15:18.761 "send_buf_size": 2097152, 00:15:18.761 "enable_recv_pipe": true, 00:15:18.761 "enable_quickack": false, 00:15:18.761 "enable_placement_id": 0, 00:15:18.761 "enable_zerocopy_send_server": true, 00:15:18.761 "enable_zerocopy_send_client": false, 00:15:18.761 "zerocopy_threshold": 0, 00:15:18.761 "tls_version": 0, 00:15:18.761 "enable_ktls": false 00:15:18.761 } 00:15:18.761 }, 00:15:18.761 { 00:15:18.761 "method": "sock_impl_set_options", 00:15:18.761 "params": { 00:15:18.761 "impl_name": "ssl", 00:15:18.761 "recv_buf_size": 4096, 00:15:18.761 "send_buf_size": 4096, 00:15:18.761 "enable_recv_pipe": true, 00:15:18.761 "enable_quickack": false, 00:15:18.761 "enable_placement_id": 0, 00:15:18.761 "enable_zerocopy_send_server": true, 00:15:18.761 "enable_zerocopy_send_client": false, 00:15:18.761 "zerocopy_threshold": 0, 00:15:18.761 "tls_version": 0, 00:15:18.761 "enable_ktls": false 00:15:18.761 } 00:15:18.761 } 00:15:18.761 ] 00:15:18.761 }, 00:15:18.761 { 00:15:18.761 "subsystem": "vmd", 00:15:18.761 "config": [] 00:15:18.761 }, 00:15:18.761 { 00:15:18.761 "subsystem": "accel", 00:15:18.761 "config": [ 00:15:18.761 { 00:15:18.761 "method": "accel_set_options", 00:15:18.761 "params": { 00:15:18.761 "small_cache_size": 128, 00:15:18.761 "large_cache_size": 16, 00:15:18.761 "task_count": 2048, 00:15:18.761 "sequence_count": 2048, 00:15:18.761 "buf_count": 2048 00:15:18.761 } 00:15:18.761 } 00:15:18.761 ] 00:15:18.761 }, 00:15:18.761 { 00:15:18.761 "subsystem": "bdev", 00:15:18.761 "config": [ 00:15:18.761 { 00:15:18.761 "method": "bdev_set_options", 00:15:18.761 "params": { 00:15:18.761 "bdev_io_pool_size": 65535, 
00:15:18.761 "bdev_io_cache_size": 256, 00:15:18.761 "bdev_auto_examine": true, 00:15:18.761 "iobuf_small_cache_size": 128, 00:15:18.761 "iobuf_large_cache_size": 16 00:15:18.761 } 00:15:18.761 }, 00:15:18.761 { 00:15:18.761 "method": "bdev_raid_set_options", 00:15:18.761 "params": { 00:15:18.761 "process_window_size_kb": 1024 00:15:18.761 } 00:15:18.761 }, 00:15:18.761 { 00:15:18.761 "method": "bdev_iscsi_set_options", 00:15:18.761 "params": { 00:15:18.761 "timeout_sec": 30 00:15:18.761 } 00:15:18.761 }, 00:15:18.761 { 00:15:18.761 "method": "bdev_nvme_set_options", 00:15:18.761 "params": { 00:15:18.761 "action_on_timeout": "none", 00:15:18.761 "timeout_us": 0, 00:15:18.761 "timeout_admin_us": 0, 00:15:18.761 "keep_alive_timeout_ms": 10000, 00:15:18.761 "arbitration_burst": 0, 00:15:18.761 "low_priority_weight": 0, 00:15:18.761 "medium_priority_weight": 0, 00:15:18.761 "high_priority_weight": 0, 00:15:18.761 "nvme_adminq_poll_period_us": 10000, 00:15:18.761 "nvme_ioq_poll_period_us": 0, 00:15:18.761 "io_queue_requests": 512, 00:15:18.761 "delay_cmd_submit": true, 00:15:18.761 "transport_retry_count": 4, 00:15:18.761 "bdev_retry_count": 3, 00:15:18.761 "transport_ack_timeout": 0, 00:15:18.761 "ctrlr_loss_timeout_sec": 0, 00:15:18.761 "reconnect_delay_sec": 0, 00:15:18.761 "fast_io_fail_timeout_sec": 0, 00:15:18.761 "disable_auto_failback": false, 00:15:18.761 "generate_uuids": false, 00:15:18.761 "transport_tos": 0, 00:15:18.761 "nvme_error_stat": false, 00:15:18.762 "rdma_srq_size": 0, 00:15:18.762 "io_path_stat": false, 00:15:18.762 "allow_accel_sequence": false, 00:15:18.762 "rdma_max_cq_size": 0, 00:15:18.762 "rdma_cm_event_timeout_ms": 0, 00:15:18.762 "dhchap_digests": [ 00:15:18.762 "sha256", 00:15:18.762 "sha384", 00:15:18.762 "sha512" 00:15:18.762 ], 00:15:18.762 "dhchap_dhgroups": [ 00:15:18.762 "null", 00:15:18.762 "ffdhe2048", 00:15:18.762 "ffdhe3072", 00:15:18.762 "ffdhe4096", 00:15:18.762 "ffdhe6144", 00:15:18.762 "ffdhe8192" 00:15:18.762 ] 00:15:18.762 } 00:15:18.762 }, 00:15:18.762 { 00:15:18.762 "method": "bdev_nvme_attach_controller", 00:15:18.762 "params": { 00:15:18.762 "name": "TLSTEST", 00:15:18.762 "trtype": "TCP", 00:15:18.762 "adrfam": "IPv4", 00:15:18.762 "traddr": "10.0.0.2", 00:15:18.762 "trsvcid": "4420", 00:15:18.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.762 "prchk_reftag": false, 00:15:18.762 "prchk_guard": false, 00:15:18.762 "ctrlr_loss_timeout_sec": 0, 00:15:18.762 "reconnect_delay_sec": 0, 00:15:18.762 "fast_io_fail_timeout_sec": 0, 00:15:18.762 "psk": "/tmp/tmp.FeaQxlhqyw", 00:15:18.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.762 "hdgst": false, 00:15:18.762 "ddgst": false 00:15:18.762 } 00:15:18.762 }, 00:15:18.762 { 00:15:18.762 "method": "bdev_nvme_set_hotplug", 00:15:18.762 "params": { 00:15:18.762 "period_us": 100000, 00:15:18.762 "enable": false 00:15:18.762 } 00:15:18.762 }, 00:15:18.762 { 00:15:18.762 "method": "bdev_wait_for_examine" 00:15:18.762 } 00:15:18.762 ] 00:15:18.762 }, 00:15:18.762 { 00:15:18.762 "subsystem": "nbd", 00:15:18.762 "config": [] 00:15:18.762 } 00:15:18.762 ] 00:15:18.762 }' 00:15:18.762 16:11:19 -- target/tls.sh@199 -- # killprocess 3405107 00:15:18.762 16:11:19 -- common/autotest_common.sh@936 -- # '[' -z 3405107 ']' 00:15:18.762 16:11:19 -- common/autotest_common.sh@940 -- # kill -0 3405107 00:15:18.762 16:11:19 -- common/autotest_common.sh@941 -- # uname 00:15:18.762 16:11:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.762 16:11:19 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 3405107 00:15:18.762 16:11:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:18.762 16:11:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:18.762 16:11:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3405107' 00:15:18.762 killing process with pid 3405107 00:15:18.762 16:11:19 -- common/autotest_common.sh@955 -- # kill 3405107 00:15:18.762 Received shutdown signal, test time was about 10.000000 seconds 00:15:18.762 00:15:18.762 Latency(us) 00:15:18.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.762 =================================================================================================================== 00:15:18.762 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:18.762 [2024-04-24 16:11:19.840109] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:18.762 16:11:19 -- common/autotest_common.sh@960 -- # wait 3405107 00:15:19.019 16:11:20 -- target/tls.sh@200 -- # killprocess 3404826 00:15:19.019 16:11:20 -- common/autotest_common.sh@936 -- # '[' -z 3404826 ']' 00:15:19.019 16:11:20 -- common/autotest_common.sh@940 -- # kill -0 3404826 00:15:19.019 16:11:20 -- common/autotest_common.sh@941 -- # uname 00:15:19.019 16:11:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.019 16:11:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3404826 00:15:19.019 16:11:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:19.019 16:11:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:19.019 16:11:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3404826' 00:15:19.019 killing process with pid 3404826 00:15:19.019 16:11:20 -- common/autotest_common.sh@955 -- # kill 3404826 00:15:19.019 [2024-04-24 16:11:20.126873] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:19.019 16:11:20 -- common/autotest_common.sh@960 -- # wait 3404826 00:15:19.277 16:11:20 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:19.277 16:11:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:19.277 16:11:20 -- target/tls.sh@203 -- # echo '{ 00:15:19.277 "subsystems": [ 00:15:19.277 { 00:15:19.277 "subsystem": "keyring", 00:15:19.277 "config": [] 00:15:19.277 }, 00:15:19.277 { 00:15:19.277 "subsystem": "iobuf", 00:15:19.277 "config": [ 00:15:19.277 { 00:15:19.277 "method": "iobuf_set_options", 00:15:19.277 "params": { 00:15:19.277 "small_pool_count": 8192, 00:15:19.277 "large_pool_count": 1024, 00:15:19.277 "small_bufsize": 8192, 00:15:19.277 "large_bufsize": 135168 00:15:19.277 } 00:15:19.277 } 00:15:19.277 ] 00:15:19.277 }, 00:15:19.277 { 00:15:19.277 "subsystem": "sock", 00:15:19.277 "config": [ 00:15:19.277 { 00:15:19.277 "method": "sock_impl_set_options", 00:15:19.277 "params": { 00:15:19.277 "impl_name": "posix", 00:15:19.277 "recv_buf_size": 2097152, 00:15:19.277 "send_buf_size": 2097152, 00:15:19.277 "enable_recv_pipe": true, 00:15:19.277 "enable_quickack": false, 00:15:19.277 "enable_placement_id": 0, 00:15:19.277 "enable_zerocopy_send_server": true, 00:15:19.277 "enable_zerocopy_send_client": false, 00:15:19.277 "zerocopy_threshold": 0, 00:15:19.277 "tls_version": 0, 00:15:19.277 "enable_ktls": false 00:15:19.277 } 00:15:19.277 }, 00:15:19.277 { 00:15:19.277 "method": 
"sock_impl_set_options", 00:15:19.277 "params": { 00:15:19.277 "impl_name": "ssl", 00:15:19.277 "recv_buf_size": 4096, 00:15:19.277 "send_buf_size": 4096, 00:15:19.277 "enable_recv_pipe": true, 00:15:19.277 "enable_quickack": false, 00:15:19.277 "enable_placement_id": 0, 00:15:19.277 "enable_zerocopy_send_server": true, 00:15:19.277 "enable_zerocopy_send_client": false, 00:15:19.277 "zerocopy_threshold": 0, 00:15:19.277 "tls_version": 0, 00:15:19.277 "enable_ktls": false 00:15:19.277 } 00:15:19.277 } 00:15:19.277 ] 00:15:19.277 }, 00:15:19.277 { 00:15:19.277 "subsystem": "vmd", 00:15:19.277 "config": [] 00:15:19.277 }, 00:15:19.277 { 00:15:19.277 "subsystem": "accel", 00:15:19.277 "config": [ 00:15:19.277 { 00:15:19.277 "method": "accel_set_options", 00:15:19.277 "params": { 00:15:19.277 "small_cache_size": 128, 00:15:19.277 "large_cache_size": 16, 00:15:19.277 "task_count": 2048, 00:15:19.277 "sequence_count": 2048, 00:15:19.277 "buf_count": 2048 00:15:19.277 } 00:15:19.277 } 00:15:19.277 ] 00:15:19.277 }, 00:15:19.277 { 00:15:19.277 "subsystem": "bdev", 00:15:19.277 "config": [ 00:15:19.277 { 00:15:19.277 "method": "bdev_set_options", 00:15:19.277 "params": { 00:15:19.277 "bdev_io_pool_size": 65535, 00:15:19.277 "bdev_io_cache_size": 256, 00:15:19.277 "bdev_auto_examine": true, 00:15:19.277 "iobuf_small_cache_size": 128, 00:15:19.277 "iobuf_large_cache_size": 16 00:15:19.277 } 00:15:19.277 }, 00:15:19.277 { 00:15:19.277 "method": "bdev_raid_set_options", 00:15:19.277 "params": { 00:15:19.277 "process_window_size_kb": 1024 00:15:19.277 } 00:15:19.277 }, 00:15:19.277 { 00:15:19.277 "method": "bdev_iscsi_set_options", 00:15:19.277 "params": { 00:15:19.277 "timeout_sec": 30 00:15:19.277 } 00:15:19.277 }, 00:15:19.277 { 00:15:19.277 "method": "bdev_nvme_set_options", 00:15:19.277 "params": { 00:15:19.277 "action_on_timeout": "none", 00:15:19.277 "timeout_us": 0, 00:15:19.277 "timeout_admin_us": 0, 00:15:19.277 "keep_alive_timeout_ms": 10000, 00:15:19.277 "arbitration_burst": 0, 00:15:19.277 "low_priority_weight": 0, 00:15:19.277 "medium_priority_weight": 0, 00:15:19.277 "high_priority_weight": 0, 00:15:19.277 "nvme_adminq_poll_period_us": 10000, 00:15:19.277 "nvme_ioq_poll_period_us": 0, 00:15:19.277 "io_queue_requests": 0, 00:15:19.277 "delay_cmd_submit": true, 00:15:19.277 "transport_retry_count": 4, 00:15:19.277 "bdev_retry_count": 3, 00:15:19.277 "transport_ack_timeout": 0, 00:15:19.277 "ctrlr_loss_timeout_sec": 0, 00:15:19.277 "reconnect_delay_sec": 0, 00:15:19.277 "fast_io_fail_timeout_sec": 0, 00:15:19.277 "disable_auto_failback": false, 00:15:19.277 "generate_uuids": false, 00:15:19.277 "transport_tos": 0, 00:15:19.277 "nvme_error_stat": false, 00:15:19.277 "rdma_srq_size": 0, 00:15:19.277 "io_path_stat": false, 00:15:19.277 "allow_accel_sequence": false, 00:15:19.278 "rdma_max_cq_size": 0, 00:15:19.278 "rdma_cm_event_timeout_ms": 0, 00:15:19.278 "dhchap_digests": [ 00:15:19.278 "sha256", 00:15:19.278 "sha384", 00:15:19.278 "sha512" 00:15:19.278 ], 00:15:19.278 "dhchap_dhgroups": [ 00:15:19.278 "null", 00:15:19.278 "ffdhe2048", 00:15:19.278 "ffdhe3072", 00:15:19.278 "ffdhe4096", 00:15:19.278 "ffdhe6144", 00:15:19.278 "ffdhe8192" 00:15:19.278 ] 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "bdev_nvme_set_hotplug", 00:15:19.278 "params": { 00:15:19.278 "period_us": 100000, 00:15:19.278 "enable": false 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "bdev_malloc_create", 00:15:19.278 "params": { 00:15:19.278 "name": "malloc0", 00:15:19.278 
"num_blocks": 8192, 00:15:19.278 "block_size": 4096, 00:15:19.278 "physical_block_size": 4096, 00:15:19.278 "uuid": "0ef7c7fb-0b2d-4639-af0e-cd4e414ec035", 00:15:19.278 "optimal_io_boundary": 0 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "bdev_wait_for_examine" 00:15:19.278 } 00:15:19.278 ] 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "subsystem": "nbd", 00:15:19.278 "config": [] 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "subsystem": "scheduler", 00:15:19.278 "config": [ 00:15:19.278 { 00:15:19.278 "method": "framework_set_scheduler", 00:15:19.278 "params": { 00:15:19.278 "name": "static" 00:15:19.278 } 00:15:19.278 } 00:15:19.278 ] 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "subsystem": "nvmf", 00:15:19.278 "config": [ 00:15:19.278 { 00:15:19.278 "method": "nvmf_set_config", 00:15:19.278 "params": { 00:15:19.278 "discovery_filter": "match_any", 00:15:19.278 "admin_cmd_passthru": { 00:15:19.278 "identify_ctrlr": false 00:15:19.278 } 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "nvmf_set_max_subsystems", 00:15:19.278 "params": { 00:15:19.278 "max_subsystems": 1024 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "nvmf_set_crdt", 00:15:19.278 "params": { 00:15:19.278 "crdt1": 0, 00:15:19.278 "crdt2": 0, 00:15:19.278 "crdt3": 0 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "nvmf_create_transport", 00:15:19.278 "params": { 00:15:19.278 "trtype": "TCP", 00:15:19.278 "max_queue_depth": 128, 00:15:19.278 "max_io_qpairs_per_ctrlr": 127, 00:15:19.278 "in_capsule_data_size": 4096, 00:15:19.278 "max_io_size": 131072, 00:15:19.278 "io_unit_size": 131072, 00:15:19.278 "max_aq_depth": 128, 00:15:19.278 "num_shared_buffers": 511, 00:15:19.278 "buf_cache_size": 4294967295, 00:15:19.278 "dif_insert_or_strip": false, 00:15:19.278 "zcopy": false, 00:15:19.278 "c2h_success": false, 00:15:19.278 "sock_priority": 0, 00:15:19.278 "abort_timeout_sec": 1, 00:15:19.278 "ack_timeout": 0, 00:15:19.278 "data_wr_pool_size": 0 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "nvmf_create_subsystem", 00:15:19.278 "params": { 00:15:19.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.278 "allow_any_host": false, 00:15:19.278 "serial_number": "SPDK00000000000001", 00:15:19.278 "model_number": "SPDK bdev Controller", 00:15:19.278 "max_namespaces": 10, 00:15:19.278 "min_cntlid": 1, 00:15:19.278 "max_cntlid": 65519, 00:15:19.278 "ana_reporting": false 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "nvmf_subsystem_add_host", 00:15:19.278 "params": { 00:15:19.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.278 "host": "nqn.2016-06.io.spdk:host1", 00:15:19.278 "psk": "/tmp/tmp.FeaQxlhqyw" 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "nvmf_subsystem_add_ns", 00:15:19.278 "params": { 00:15:19.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.278 "namespace": { 00:15:19.278 "nsid": 1, 00:15:19.278 "bdev_name": "malloc0", 00:15:19.278 "nguid": "0EF7C7FB0B2D4639AF0ECD4E414EC035", 00:15:19.278 "uuid": "0ef7c7fb-0b2d-4639-af0e-cd4e414ec035", 00:15:19.278 "no_auto_visible": false 00:15:19.278 } 00:15:19.278 } 00:15:19.278 }, 00:15:19.278 { 00:15:19.278 "method": "nvmf_subsystem_add_listener", 00:15:19.278 "params": { 00:15:19.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.278 "listen_address": { 00:15:19.278 "trtype": "TCP", 00:15:19.278 "adrfam": "IPv4", 00:15:19.278 "traddr": "10.0.0.2", 00:15:19.278 "trsvcid": "4420" 00:15:19.278 }, 00:15:19.278 "secure_channel": true 
00:15:19.278 } 00:15:19.278 } 00:15:19.278 ] 00:15:19.278 } 00:15:19.278 ] 00:15:19.278 }' 00:15:19.278 16:11:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:19.278 16:11:20 -- common/autotest_common.sh@10 -- # set +x 00:15:19.278 16:11:20 -- nvmf/common.sh@470 -- # nvmfpid=3405383 00:15:19.278 16:11:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:19.278 16:11:20 -- nvmf/common.sh@471 -- # waitforlisten 3405383 00:15:19.278 16:11:20 -- common/autotest_common.sh@817 -- # '[' -z 3405383 ']' 00:15:19.278 16:11:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.278 16:11:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:19.278 16:11:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.278 16:11:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:19.278 16:11:20 -- common/autotest_common.sh@10 -- # set +x 00:15:19.278 [2024-04-24 16:11:20.474133] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:19.278 [2024-04-24 16:11:20.474231] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.278 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.278 [2024-04-24 16:11:20.543269] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.536 [2024-04-24 16:11:20.653921] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.536 [2024-04-24 16:11:20.653988] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.536 [2024-04-24 16:11:20.654015] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.536 [2024-04-24 16:11:20.654029] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.536 [2024-04-24 16:11:20.654041] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
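The JSON blocks above were not written by hand: tls.sh@196 and @197 captured them from the live target and the live bdevperf with save_config, and tls.sh@203 now echoes the target half back into a fresh nvmf_tgt as /dev/fd/62. A sketch of that save/replay round-trip under the same paths (the <(...) process substitution is what appears as /dev/fd/62 in the log):

    tgtconf=$(scripts/rpc.py save_config)            # dump the running target's JSON-RPC config
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")  # boot a new instance from the dump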
00:15:19.536 [2024-04-24 16:11:20.654145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.794 [2024-04-24 16:11:20.887081] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.794 [2024-04-24 16:11:20.903027] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:19.794 [2024-04-24 16:11:20.919083] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:19.794 [2024-04-24 16:11:20.928993] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.359 16:11:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:20.359 16:11:21 -- common/autotest_common.sh@850 -- # return 0 00:15:20.359 16:11:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:20.359 16:11:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:20.359 16:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:20.359 16:11:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.359 16:11:21 -- target/tls.sh@207 -- # bdevperf_pid=3405421 00:15:20.359 16:11:21 -- target/tls.sh@208 -- # waitforlisten 3405421 /var/tmp/bdevperf.sock 00:15:20.359 16:11:21 -- common/autotest_common.sh@817 -- # '[' -z 3405421 ']' 00:15:20.359 16:11:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:20.359 16:11:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:20.359 16:11:21 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:20.359 16:11:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:20.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
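The initiator side mirrors the same pattern: bdevperf starts idle (-z) on its own RPC socket, the saved client config, including the bdev_nvme_attach_controller call carrying the PSK, is fed in as /dev/fd/63, and the I/O job is then kicked off by the helper script at tls.sh@211. A sketch assembled from the commands of this run:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests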
00:15:20.359 16:11:21 -- target/tls.sh@204 -- # echo '{ 00:15:20.359 "subsystems": [ 00:15:20.359 { 00:15:20.359 "subsystem": "keyring", 00:15:20.359 "config": [] 00:15:20.359 }, 00:15:20.359 { 00:15:20.359 "subsystem": "iobuf", 00:15:20.359 "config": [ 00:15:20.359 { 00:15:20.359 "method": "iobuf_set_options", 00:15:20.359 "params": { 00:15:20.359 "small_pool_count": 8192, 00:15:20.359 "large_pool_count": 1024, 00:15:20.359 "small_bufsize": 8192, 00:15:20.359 "large_bufsize": 135168 00:15:20.359 } 00:15:20.359 } 00:15:20.359 ] 00:15:20.359 }, 00:15:20.359 { 00:15:20.360 "subsystem": "sock", 00:15:20.360 "config": [ 00:15:20.360 { 00:15:20.360 "method": "sock_impl_set_options", 00:15:20.360 "params": { 00:15:20.360 "impl_name": "posix", 00:15:20.360 "recv_buf_size": 2097152, 00:15:20.360 "send_buf_size": 2097152, 00:15:20.360 "enable_recv_pipe": true, 00:15:20.360 "enable_quickack": false, 00:15:20.360 "enable_placement_id": 0, 00:15:20.360 "enable_zerocopy_send_server": true, 00:15:20.360 "enable_zerocopy_send_client": false, 00:15:20.360 "zerocopy_threshold": 0, 00:15:20.360 "tls_version": 0, 00:15:20.360 "enable_ktls": false 00:15:20.360 } 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "method": "sock_impl_set_options", 00:15:20.360 "params": { 00:15:20.360 "impl_name": "ssl", 00:15:20.360 "recv_buf_size": 4096, 00:15:20.360 "send_buf_size": 4096, 00:15:20.360 "enable_recv_pipe": true, 00:15:20.360 "enable_quickack": false, 00:15:20.360 "enable_placement_id": 0, 00:15:20.360 "enable_zerocopy_send_server": true, 00:15:20.360 "enable_zerocopy_send_client": false, 00:15:20.360 "zerocopy_threshold": 0, 00:15:20.360 "tls_version": 0, 00:15:20.360 "enable_ktls": false 00:15:20.360 } 00:15:20.360 } 00:15:20.360 ] 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "subsystem": "vmd", 00:15:20.360 "config": [] 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "subsystem": "accel", 00:15:20.360 "config": [ 00:15:20.360 { 00:15:20.360 "method": "accel_set_options", 00:15:20.360 "params": { 00:15:20.360 "small_cache_size": 128, 00:15:20.360 "large_cache_size": 16, 00:15:20.360 "task_count": 2048, 00:15:20.360 "sequence_count": 2048, 00:15:20.360 "buf_count": 2048 00:15:20.360 } 00:15:20.360 } 00:15:20.360 ] 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "subsystem": "bdev", 00:15:20.360 "config": [ 00:15:20.360 { 00:15:20.360 "method": "bdev_set_options", 00:15:20.360 "params": { 00:15:20.360 "bdev_io_pool_size": 65535, 00:15:20.360 "bdev_io_cache_size": 256, 00:15:20.360 "bdev_auto_examine": true, 00:15:20.360 "iobuf_small_cache_size": 128, 00:15:20.360 "iobuf_large_cache_size": 16 00:15:20.360 } 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "method": "bdev_raid_set_options", 00:15:20.360 "params": { 00:15:20.360 "process_window_size_kb": 1024 00:15:20.360 } 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "method": "bdev_iscsi_set_options", 00:15:20.360 "params": { 00:15:20.360 "timeout_sec": 30 00:15:20.360 } 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "method": "bdev_nvme_set_options", 00:15:20.360 "params": { 00:15:20.360 "action_on_timeout": "none", 00:15:20.360 "timeout_us": 0, 00:15:20.360 "timeout_admin_us": 0, 00:15:20.360 "keep_alive_timeout_ms": 10000, 00:15:20.360 "arbitration_burst": 0, 00:15:20.360 "low_priority_weight": 0, 00:15:20.360 "medium_priority_weight": 0, 00:15:20.360 "high_priority_weight": 0, 00:15:20.360 "nvme_adminq_poll_period_us": 10000, 00:15:20.360 "nvme_ioq_poll_period_us": 0, 00:15:20.360 "io_queue_requests": 512, 00:15:20.360 "delay_cmd_submit": true, 00:15:20.360 "transport_retry_count": 
4, 00:15:20.360 "bdev_retry_count": 3, 00:15:20.360 "transport_ack_timeout": 0, 00:15:20.360 "ctrlr_loss_timeout_sec": 0, 00:15:20.360 "reconnect_delay_sec": 0, 00:15:20.360 "fast_io_fail_timeout_sec": 0, 00:15:20.360 "disable_auto_failback": false, 00:15:20.360 "generate_uuids": false, 00:15:20.360 "transport_tos": 0, 00:15:20.360 "nvme_error_stat": false, 00:15:20.360 "rdma_srq_size": 0, 00:15:20.360 "io_path_stat": false, 00:15:20.360 "allow_accel_sequence": false, 00:15:20.360 "rdma_max_cq_size": 0, 00:15:20.360 "rdma_cm_event_timeout_ms": 0, 00:15:20.360 "dhchap_digests": [ 00:15:20.360 "sha256", 00:15:20.360 "sha384", 00:15:20.360 "sha512" 00:15:20.360 ], 00:15:20.360 "dhchap_dhgroups": [ 00:15:20.360 "null", 00:15:20.360 "ffdhe2048", 00:15:20.360 "ffdhe3072", 00:15:20.360 "ffdhe4096", 00:15:20.360 "ffdhe6144", 00:15:20.360 "ffdhe8192" 00:15:20.360 ] 00:15:20.360 } 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "method": "bdev_nvme_attach_controller", 00:15:20.360 "params": { 00:15:20.360 "name": "TLSTEST", 00:15:20.360 "trtype": "TCP", 00:15:20.360 "adrfam": "IPv4", 00:15:20.360 "traddr": "10.0.0.2", 00:15:20.360 "trsvcid": "4420", 00:15:20.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.360 "prchk_reftag": false, 00:15:20.360 "prchk_guard": false, 00:15:20.360 "ctrlr_loss_timeout_sec": 0, 00:15:20.360 "reconnect_delay_sec": 0, 00:15:20.360 "fast_io_fail_timeout_sec": 0, 00:15:20.360 "psk": "/tmp/tmp.FeaQxlhqyw", 00:15:20.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.360 "hdgst": false, 00:15:20.360 "ddgst": false 00:15:20.360 } 00:15:20.361 }, 00:15:20.361 { 00:15:20.361 "method": "bdev_nvme_set_hotplug", 00:15:20.361 "params": { 00:15:20.361 "period_us": 100000, 00:15:20.361 "enable": false 00:15:20.361 } 00:15:20.361 }, 00:15:20.361 { 00:15:20.361 "method": "bdev_wait_for_examine" 00:15:20.361 } 00:15:20.361 ] 00:15:20.361 }, 00:15:20.361 { 00:15:20.361 "subsystem": "nbd", 00:15:20.361 "config": [] 00:15:20.361 } 00:15:20.361 ] 00:15:20.361 }' 00:15:20.361 16:11:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:20.361 16:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:20.361 [2024-04-24 16:11:21.450353] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:20.361 [2024-04-24 16:11:21.450441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405421 ] 00:15:20.361 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.361 [2024-04-24 16:11:21.513476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.361 [2024-04-24 16:11:21.618214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.618 [2024-04-24 16:11:21.780559] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:20.618 [2024-04-24 16:11:21.780692] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:21.182 16:11:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:21.182 16:11:22 -- common/autotest_common.sh@850 -- # return 0 00:15:21.182 16:11:22 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:21.439 Running I/O for 10 seconds... 
00:15:31.403 00:15:31.403 Latency(us) 00:15:31.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.403 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:31.403 Verification LBA range: start 0x0 length 0x2000 00:15:31.403 TLSTESTn1 : 10.05 2492.46 9.74 0.00 0.00 51219.96 9466.31 80779.19 00:15:31.403 =================================================================================================================== 00:15:31.403 Total : 2492.46 9.74 0.00 0.00 51219.96 9466.31 80779.19 00:15:31.403 0 00:15:31.403 16:11:32 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:31.403 16:11:32 -- target/tls.sh@214 -- # killprocess 3405421 00:15:31.403 16:11:32 -- common/autotest_common.sh@936 -- # '[' -z 3405421 ']' 00:15:31.403 16:11:32 -- common/autotest_common.sh@940 -- # kill -0 3405421 00:15:31.403 16:11:32 -- common/autotest_common.sh@941 -- # uname 00:15:31.403 16:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.403 16:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3405421 00:15:31.403 16:11:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:31.403 16:11:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:31.403 16:11:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3405421' 00:15:31.403 killing process with pid 3405421 00:15:31.403 16:11:32 -- common/autotest_common.sh@955 -- # kill 3405421 00:15:31.403 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.403 00:15:31.403 Latency(us) 00:15:31.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.403 =================================================================================================================== 00:15:31.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.403 [2024-04-24 16:11:32.640655] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:31.403 16:11:32 -- common/autotest_common.sh@960 -- # wait 3405421 00:15:31.661 16:11:32 -- target/tls.sh@215 -- # killprocess 3405383 00:15:31.661 16:11:32 -- common/autotest_common.sh@936 -- # '[' -z 3405383 ']' 00:15:31.661 16:11:32 -- common/autotest_common.sh@940 -- # kill -0 3405383 00:15:31.661 16:11:32 -- common/autotest_common.sh@941 -- # uname 00:15:31.661 16:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.661 16:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3405383 00:15:31.661 16:11:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:31.661 16:11:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:31.661 16:11:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3405383' 00:15:31.661 killing process with pid 3405383 00:15:31.661 16:11:32 -- common/autotest_common.sh@955 -- # kill 3405383 00:15:31.661 [2024-04-24 16:11:32.916430] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:31.661 16:11:32 -- common/autotest_common.sh@960 -- # wait 3405383 00:15:32.227 16:11:33 -- target/tls.sh@218 -- # nvmfappstart 00:15:32.227 16:11:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:32.227 16:11:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:32.227 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:15:32.227 16:11:33 -- 
nvmf/common.sh@470 -- # nvmfpid=3406866 00:15:32.227 16:11:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:32.227 16:11:33 -- nvmf/common.sh@471 -- # waitforlisten 3406866 00:15:32.227 16:11:33 -- common/autotest_common.sh@817 -- # '[' -z 3406866 ']' 00:15:32.227 16:11:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.227 16:11:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:32.227 16:11:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.227 16:11:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:32.227 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:15:32.227 [2024-04-24 16:11:33.264576] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:32.227 [2024-04-24 16:11:33.264658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.227 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.227 [2024-04-24 16:11:33.329311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.227 [2024-04-24 16:11:33.430181] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.227 [2024-04-24 16:11:33.430239] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.227 [2024-04-24 16:11:33.430253] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.227 [2024-04-24 16:11:33.430264] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.227 [2024-04-24 16:11:33.430273] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
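The setup_nvmf_tgt helper invoked next (tls.sh@49 through @58) is the same six-call RPC sequence every pass of this test performs; condensed here with the address, NQNs, and key path from this run:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                      # -k: TLS listener (experimental)
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0   # 32 MiB backing bdev, 4096-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FeaQxlhqyw   # PSK-path form, deprecated in v24.09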
00:15:32.227 [2024-04-24 16:11:33.430327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.485 16:11:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:32.485 16:11:33 -- common/autotest_common.sh@850 -- # return 0 00:15:32.485 16:11:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:32.485 16:11:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:32.485 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:15:32.485 16:11:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.485 16:11:33 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.FeaQxlhqyw 00:15:32.485 16:11:33 -- target/tls.sh@49 -- # local key=/tmp/tmp.FeaQxlhqyw 00:15:32.485 16:11:33 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:32.743 [2024-04-24 16:11:33.787515] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.743 16:11:33 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:33.001 16:11:34 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:33.001 [2024-04-24 16:11:34.272835] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.001 [2024-04-24 16:11:34.273104] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.259 16:11:34 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:33.259 malloc0 00:15:33.259 16:11:34 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:33.825 16:11:34 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FeaQxlhqyw 00:15:33.825 [2024-04-24 16:11:35.066520] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:33.825 16:11:35 -- target/tls.sh@222 -- # bdevperf_pid=3407037 00:15:33.825 16:11:35 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:33.825 16:11:35 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:33.825 16:11:35 -- target/tls.sh@225 -- # waitforlisten 3407037 /var/tmp/bdevperf.sock 00:15:33.825 16:11:35 -- common/autotest_common.sh@817 -- # '[' -z 3407037 ']' 00:15:33.825 16:11:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.825 16:11:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:33.825 16:11:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
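The attach performed below (tls.sh@227 and @228) switches to the keyring flow: the PSK file is first registered as a named key on bdevperf's RPC socket, and the controller then references the key by name instead of the raw file path, which is the form the earlier nvme_ctrlr_psk deprecation warnings flag. A sketch with the names from this run:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FeaQxlhqyw
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1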
00:15:33.825 16:11:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:33.825 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:34.083 [2024-04-24 16:11:35.131452] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:34.083 [2024-04-24 16:11:35.131534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407037 ] 00:15:34.083 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.083 [2024-04-24 16:11:35.200391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.083 [2024-04-24 16:11:35.314605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.341 16:11:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:34.341 16:11:35 -- common/autotest_common.sh@850 -- # return 0 00:15:34.341 16:11:35 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FeaQxlhqyw 00:15:34.599 16:11:35 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:34.599 [2024-04-24 16:11:35.880337] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:34.857 nvme0n1 00:15:34.857 16:11:35 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:34.857 Running I/O for 1 seconds... 
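The initiator side mirrors it: bdevperf starts idle (-z) on its own RPC socket, the PSK file is registered as keyring key0, and the controller is attached with that key (a sketch; same $PSK_FILE assumption as above):

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$PSK_FILE"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests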
00:15:36.229 00:15:36.229 Latency(us) 00:15:36.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.229 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.229 Verification LBA range: start 0x0 length 0x2000 00:15:36.229 nvme0n1 : 1.05 2705.86 10.57 0.00 0.00 46248.19 6359.42 84662.80 00:15:36.229 =================================================================================================================== 00:15:36.229 Total : 2705.86 10.57 0.00 0.00 46248.19 6359.42 84662.80 00:15:36.229 0 00:15:36.229 16:11:37 -- target/tls.sh@234 -- # killprocess 3407037 00:15:36.229 16:11:37 -- common/autotest_common.sh@936 -- # '[' -z 3407037 ']' 00:15:36.229 16:11:37 -- common/autotest_common.sh@940 -- # kill -0 3407037 00:15:36.229 16:11:37 -- common/autotest_common.sh@941 -- # uname 00:15:36.229 16:11:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:36.229 16:11:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3407037 00:15:36.229 16:11:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:36.229 16:11:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:36.229 16:11:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3407037' 00:15:36.229 killing process with pid 3407037 00:15:36.229 16:11:37 -- common/autotest_common.sh@955 -- # kill 3407037 00:15:36.229 Received shutdown signal, test time was about 1.000000 seconds 00:15:36.229 00:15:36.229 Latency(us) 00:15:36.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.229 =================================================================================================================== 00:15:36.229 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:36.229 16:11:37 -- common/autotest_common.sh@960 -- # wait 3407037 00:15:36.229 16:11:37 -- target/tls.sh@235 -- # killprocess 3406866 00:15:36.229 16:11:37 -- common/autotest_common.sh@936 -- # '[' -z 3406866 ']' 00:15:36.229 16:11:37 -- common/autotest_common.sh@940 -- # kill -0 3406866 00:15:36.229 16:11:37 -- common/autotest_common.sh@941 -- # uname 00:15:36.229 16:11:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:36.229 16:11:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3406866 00:15:36.229 16:11:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:36.229 16:11:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:36.229 16:11:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3406866' 00:15:36.229 killing process with pid 3406866 00:15:36.229 16:11:37 -- common/autotest_common.sh@955 -- # kill 3406866 00:15:36.229 [2024-04-24 16:11:37.464029] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:36.229 16:11:37 -- common/autotest_common.sh@960 -- # wait 3406866 00:15:36.488 16:11:37 -- target/tls.sh@238 -- # nvmfappstart 00:15:36.488 16:11:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:36.488 16:11:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:36.488 16:11:37 -- common/autotest_common.sh@10 -- # set +x 00:15:36.488 16:11:37 -- nvmf/common.sh@470 -- # nvmfpid=3407434 00:15:36.488 16:11:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:36.488 16:11:37 -- nvmf/common.sh@471 -- # waitforlisten 3407434 
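The MiB/s column in the table above is just the IOPS column scaled by the 4 KiB I/O size set with -o 4k, i.e. MiB/s = IOPS * 4096 / 2^20. A quick sanity check with bc:

    echo 'scale=4; 2705.86 * 4096 / 1048576' | bc    # -> 10.5697, the 10.57 MiB/s reported above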
00:15:36.488 16:11:37 -- common/autotest_common.sh@817 -- # '[' -z 3407434 ']' 00:15:36.488 16:11:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.488 16:11:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.488 16:11:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.488 16:11:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.488 16:11:37 -- common/autotest_common.sh@10 -- # set +x 00:15:36.747 [2024-04-24 16:11:37.807538] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:36.747 [2024-04-24 16:11:37.807631] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.747 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.747 [2024-04-24 16:11:37.875817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.747 [2024-04-24 16:11:37.985156] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.747 [2024-04-24 16:11:37.985222] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.747 [2024-04-24 16:11:37.985247] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.747 [2024-04-24 16:11:37.985260] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.747 [2024-04-24 16:11:37.985272] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
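For reference, the waitforlisten calls traced here poll the application's RPC socket until it answers. A minimal stand-in (not the actual autotest_common.sh implementation; assumes rpc.py and its -t timeout flag are available):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app exited before it started listening
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }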
00:15:36.747 [2024-04-24 16:11:37.985307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.681 16:11:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.681 16:11:38 -- common/autotest_common.sh@850 -- # return 0 00:15:37.681 16:11:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:37.681 16:11:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:37.681 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:15:37.681 16:11:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.681 16:11:38 -- target/tls.sh@239 -- # rpc_cmd 00:15:37.681 16:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.681 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:15:37.681 [2024-04-24 16:11:38.812519] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.681 malloc0 00:15:37.681 [2024-04-24 16:11:38.844899] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:37.681 [2024-04-24 16:11:38.845213] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.681 16:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.681 16:11:38 -- target/tls.sh@252 -- # bdevperf_pid=3407589 00:15:37.681 16:11:38 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:37.681 16:11:38 -- target/tls.sh@254 -- # waitforlisten 3407589 /var/tmp/bdevperf.sock 00:15:37.681 16:11:38 -- common/autotest_common.sh@817 -- # '[' -z 3407589 ']' 00:15:37.681 16:11:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.681 16:11:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:37.681 16:11:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.681 16:11:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:37.681 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:15:37.681 [2024-04-24 16:11:38.914379] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:15:37.681 [2024-04-24 16:11:38.914442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407589 ] 00:15:37.681 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.975 [2024-04-24 16:11:38.975962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.975 [2024-04-24 16:11:39.085725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.975 16:11:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.975 16:11:39 -- common/autotest_common.sh@850 -- # return 0 00:15:37.975 16:11:39 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FeaQxlhqyw 00:15:38.252 16:11:39 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:38.509 [2024-04-24 16:11:39.751440] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.767 nvme0n1 00:15:38.767 16:11:39 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:38.767 Running I/O for 1 seconds... 00:15:39.699 00:15:39.699 Latency(us) 00:15:39.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.699 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.699 Verification LBA range: start 0x0 length 0x2000 00:15:39.699 nvme0n1 : 1.04 2863.82 11.19 0.00 0.00 43969.46 8155.59 86992.97 00:15:39.699 =================================================================================================================== 00:15:39.699 Total : 2863.82 11.19 0.00 0.00 43969.46 8155.59 86992.97 00:15:39.699 0 00:15:39.957 16:11:40 -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:39.957 16:11:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.957 16:11:40 -- common/autotest_common.sh@10 -- # set +x 00:15:39.957 16:11:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.957 16:11:41 -- target/tls.sh@263 -- # tgtcfg='{ 00:15:39.957 "subsystems": [ 00:15:39.957 { 00:15:39.957 "subsystem": "keyring", 00:15:39.957 "config": [ 00:15:39.957 { 00:15:39.957 "method": "keyring_file_add_key", 00:15:39.957 "params": { 00:15:39.957 "name": "key0", 00:15:39.957 "path": "/tmp/tmp.FeaQxlhqyw" 00:15:39.957 } 00:15:39.957 } 00:15:39.957 ] 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "subsystem": "iobuf", 00:15:39.957 "config": [ 00:15:39.957 { 00:15:39.957 "method": "iobuf_set_options", 00:15:39.957 "params": { 00:15:39.957 "small_pool_count": 8192, 00:15:39.957 "large_pool_count": 1024, 00:15:39.957 "small_bufsize": 8192, 00:15:39.957 "large_bufsize": 135168 00:15:39.957 } 00:15:39.957 } 00:15:39.957 ] 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "subsystem": "sock", 00:15:39.957 "config": [ 00:15:39.957 { 00:15:39.957 "method": "sock_impl_set_options", 00:15:39.957 "params": { 00:15:39.957 "impl_name": "posix", 00:15:39.957 "recv_buf_size": 2097152, 00:15:39.957 "send_buf_size": 2097152, 00:15:39.957 "enable_recv_pipe": true, 00:15:39.957 "enable_quickack": false, 00:15:39.957 "enable_placement_id": 0, 00:15:39.957 
"enable_zerocopy_send_server": true, 00:15:39.957 "enable_zerocopy_send_client": false, 00:15:39.957 "zerocopy_threshold": 0, 00:15:39.957 "tls_version": 0, 00:15:39.957 "enable_ktls": false 00:15:39.957 } 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "method": "sock_impl_set_options", 00:15:39.957 "params": { 00:15:39.957 "impl_name": "ssl", 00:15:39.957 "recv_buf_size": 4096, 00:15:39.957 "send_buf_size": 4096, 00:15:39.957 "enable_recv_pipe": true, 00:15:39.957 "enable_quickack": false, 00:15:39.957 "enable_placement_id": 0, 00:15:39.957 "enable_zerocopy_send_server": true, 00:15:39.957 "enable_zerocopy_send_client": false, 00:15:39.957 "zerocopy_threshold": 0, 00:15:39.957 "tls_version": 0, 00:15:39.957 "enable_ktls": false 00:15:39.957 } 00:15:39.957 } 00:15:39.957 ] 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "subsystem": "vmd", 00:15:39.957 "config": [] 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "subsystem": "accel", 00:15:39.957 "config": [ 00:15:39.957 { 00:15:39.957 "method": "accel_set_options", 00:15:39.957 "params": { 00:15:39.957 "small_cache_size": 128, 00:15:39.957 "large_cache_size": 16, 00:15:39.957 "task_count": 2048, 00:15:39.957 "sequence_count": 2048, 00:15:39.957 "buf_count": 2048 00:15:39.957 } 00:15:39.957 } 00:15:39.957 ] 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "subsystem": "bdev", 00:15:39.957 "config": [ 00:15:39.957 { 00:15:39.957 "method": "bdev_set_options", 00:15:39.957 "params": { 00:15:39.957 "bdev_io_pool_size": 65535, 00:15:39.957 "bdev_io_cache_size": 256, 00:15:39.957 "bdev_auto_examine": true, 00:15:39.957 "iobuf_small_cache_size": 128, 00:15:39.957 "iobuf_large_cache_size": 16 00:15:39.957 } 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "method": "bdev_raid_set_options", 00:15:39.957 "params": { 00:15:39.957 "process_window_size_kb": 1024 00:15:39.957 } 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "method": "bdev_iscsi_set_options", 00:15:39.957 "params": { 00:15:39.957 "timeout_sec": 30 00:15:39.957 } 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "method": "bdev_nvme_set_options", 00:15:39.957 "params": { 00:15:39.957 "action_on_timeout": "none", 00:15:39.957 "timeout_us": 0, 00:15:39.957 "timeout_admin_us": 0, 00:15:39.957 "keep_alive_timeout_ms": 10000, 00:15:39.957 "arbitration_burst": 0, 00:15:39.957 "low_priority_weight": 0, 00:15:39.957 "medium_priority_weight": 0, 00:15:39.957 "high_priority_weight": 0, 00:15:39.957 "nvme_adminq_poll_period_us": 10000, 00:15:39.957 "nvme_ioq_poll_period_us": 0, 00:15:39.957 "io_queue_requests": 0, 00:15:39.957 "delay_cmd_submit": true, 00:15:39.957 "transport_retry_count": 4, 00:15:39.957 "bdev_retry_count": 3, 00:15:39.957 "transport_ack_timeout": 0, 00:15:39.957 "ctrlr_loss_timeout_sec": 0, 00:15:39.957 "reconnect_delay_sec": 0, 00:15:39.957 "fast_io_fail_timeout_sec": 0, 00:15:39.957 "disable_auto_failback": false, 00:15:39.957 "generate_uuids": false, 00:15:39.957 "transport_tos": 0, 00:15:39.957 "nvme_error_stat": false, 00:15:39.957 "rdma_srq_size": 0, 00:15:39.957 "io_path_stat": false, 00:15:39.957 "allow_accel_sequence": false, 00:15:39.957 "rdma_max_cq_size": 0, 00:15:39.957 "rdma_cm_event_timeout_ms": 0, 00:15:39.957 "dhchap_digests": [ 00:15:39.957 "sha256", 00:15:39.957 "sha384", 00:15:39.957 "sha512" 00:15:39.957 ], 00:15:39.957 "dhchap_dhgroups": [ 00:15:39.957 "null", 00:15:39.957 "ffdhe2048", 00:15:39.957 "ffdhe3072", 00:15:39.957 "ffdhe4096", 00:15:39.957 "ffdhe6144", 00:15:39.957 "ffdhe8192" 00:15:39.957 ] 00:15:39.957 } 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "method": 
"bdev_nvme_set_hotplug", 00:15:39.957 "params": { 00:15:39.957 "period_us": 100000, 00:15:39.957 "enable": false 00:15:39.957 } 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "method": "bdev_malloc_create", 00:15:39.957 "params": { 00:15:39.957 "name": "malloc0", 00:15:39.957 "num_blocks": 8192, 00:15:39.957 "block_size": 4096, 00:15:39.957 "physical_block_size": 4096, 00:15:39.957 "uuid": "7cdf4780-cb7b-4815-bd59-a47d3d69f295", 00:15:39.957 "optimal_io_boundary": 0 00:15:39.957 } 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "method": "bdev_wait_for_examine" 00:15:39.957 } 00:15:39.957 ] 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "subsystem": "nbd", 00:15:39.957 "config": [] 00:15:39.957 }, 00:15:39.957 { 00:15:39.957 "subsystem": "scheduler", 00:15:39.957 "config": [ 00:15:39.957 { 00:15:39.957 "method": "framework_set_scheduler", 00:15:39.957 "params": { 00:15:39.957 "name": "static" 00:15:39.957 } 00:15:39.958 } 00:15:39.958 ] 00:15:39.958 }, 00:15:39.958 { 00:15:39.958 "subsystem": "nvmf", 00:15:39.958 "config": [ 00:15:39.958 { 00:15:39.958 "method": "nvmf_set_config", 00:15:39.958 "params": { 00:15:39.958 "discovery_filter": "match_any", 00:15:39.958 "admin_cmd_passthru": { 00:15:39.958 "identify_ctrlr": false 00:15:39.958 } 00:15:39.958 } 00:15:39.958 }, 00:15:39.958 { 00:15:39.958 "method": "nvmf_set_max_subsystems", 00:15:39.958 "params": { 00:15:39.958 "max_subsystems": 1024 00:15:39.958 } 00:15:39.958 }, 00:15:39.958 { 00:15:39.958 "method": "nvmf_set_crdt", 00:15:39.958 "params": { 00:15:39.958 "crdt1": 0, 00:15:39.958 "crdt2": 0, 00:15:39.958 "crdt3": 0 00:15:39.958 } 00:15:39.958 }, 00:15:39.958 { 00:15:39.958 "method": "nvmf_create_transport", 00:15:39.958 "params": { 00:15:39.958 "trtype": "TCP", 00:15:39.958 "max_queue_depth": 128, 00:15:39.958 "max_io_qpairs_per_ctrlr": 127, 00:15:39.958 "in_capsule_data_size": 4096, 00:15:39.958 "max_io_size": 131072, 00:15:39.958 "io_unit_size": 131072, 00:15:39.958 "max_aq_depth": 128, 00:15:39.958 "num_shared_buffers": 511, 00:15:39.958 "buf_cache_size": 4294967295, 00:15:39.958 "dif_insert_or_strip": false, 00:15:39.958 "zcopy": false, 00:15:39.958 "c2h_success": false, 00:15:39.958 "sock_priority": 0, 00:15:39.958 "abort_timeout_sec": 1, 00:15:39.958 "ack_timeout": 0, 00:15:39.958 "data_wr_pool_size": 0 00:15:39.958 } 00:15:39.958 }, 00:15:39.958 { 00:15:39.958 "method": "nvmf_create_subsystem", 00:15:39.958 "params": { 00:15:39.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.958 "allow_any_host": false, 00:15:39.958 "serial_number": "00000000000000000000", 00:15:39.958 "model_number": "SPDK bdev Controller", 00:15:39.958 "max_namespaces": 32, 00:15:39.958 "min_cntlid": 1, 00:15:39.958 "max_cntlid": 65519, 00:15:39.958 "ana_reporting": false 00:15:39.958 } 00:15:39.958 }, 00:15:39.958 { 00:15:39.958 "method": "nvmf_subsystem_add_host", 00:15:39.958 "params": { 00:15:39.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.958 "host": "nqn.2016-06.io.spdk:host1", 00:15:39.958 "psk": "key0" 00:15:39.958 } 00:15:39.958 }, 00:15:39.958 { 00:15:39.958 "method": "nvmf_subsystem_add_ns", 00:15:39.958 "params": { 00:15:39.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.958 "namespace": { 00:15:39.958 "nsid": 1, 00:15:39.958 "bdev_name": "malloc0", 00:15:39.958 "nguid": "7CDF4780CB7B4815BD59A47D3D69F295", 00:15:39.958 "uuid": "7cdf4780-cb7b-4815-bd59-a47d3d69f295", 00:15:39.958 "no_auto_visible": false 00:15:39.958 } 00:15:39.958 } 00:15:39.958 }, 00:15:39.958 { 00:15:39.958 "method": "nvmf_subsystem_add_listener", 00:15:39.958 "params": { 
00:15:39.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.958 "listen_address": { 00:15:39.958 "trtype": "TCP", 00:15:39.958 "adrfam": "IPv4", 00:15:39.958 "traddr": "10.0.0.2", 00:15:39.958 "trsvcid": "4420" 00:15:39.958 }, 00:15:39.958 "secure_channel": true 00:15:39.958 } 00:15:39.958 } 00:15:39.958 ] 00:15:39.958 } 00:15:39.958 ] 00:15:39.958 }' 00:15:39.958 16:11:41 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:40.216 16:11:41 -- target/tls.sh@264 -- # bperfcfg='{ 00:15:40.216 "subsystems": [ 00:15:40.216 { 00:15:40.216 "subsystem": "keyring", 00:15:40.216 "config": [ 00:15:40.216 { 00:15:40.216 "method": "keyring_file_add_key", 00:15:40.216 "params": { 00:15:40.216 "name": "key0", 00:15:40.216 "path": "/tmp/tmp.FeaQxlhqyw" 00:15:40.216 } 00:15:40.216 } 00:15:40.217 ] 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "subsystem": "iobuf", 00:15:40.217 "config": [ 00:15:40.217 { 00:15:40.217 "method": "iobuf_set_options", 00:15:40.217 "params": { 00:15:40.217 "small_pool_count": 8192, 00:15:40.217 "large_pool_count": 1024, 00:15:40.217 "small_bufsize": 8192, 00:15:40.217 "large_bufsize": 135168 00:15:40.217 } 00:15:40.217 } 00:15:40.217 ] 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "subsystem": "sock", 00:15:40.217 "config": [ 00:15:40.217 { 00:15:40.217 "method": "sock_impl_set_options", 00:15:40.217 "params": { 00:15:40.217 "impl_name": "posix", 00:15:40.217 "recv_buf_size": 2097152, 00:15:40.217 "send_buf_size": 2097152, 00:15:40.217 "enable_recv_pipe": true, 00:15:40.217 "enable_quickack": false, 00:15:40.217 "enable_placement_id": 0, 00:15:40.217 "enable_zerocopy_send_server": true, 00:15:40.217 "enable_zerocopy_send_client": false, 00:15:40.217 "zerocopy_threshold": 0, 00:15:40.217 "tls_version": 0, 00:15:40.217 "enable_ktls": false 00:15:40.217 } 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "method": "sock_impl_set_options", 00:15:40.217 "params": { 00:15:40.217 "impl_name": "ssl", 00:15:40.217 "recv_buf_size": 4096, 00:15:40.217 "send_buf_size": 4096, 00:15:40.217 "enable_recv_pipe": true, 00:15:40.217 "enable_quickack": false, 00:15:40.217 "enable_placement_id": 0, 00:15:40.217 "enable_zerocopy_send_server": true, 00:15:40.217 "enable_zerocopy_send_client": false, 00:15:40.217 "zerocopy_threshold": 0, 00:15:40.217 "tls_version": 0, 00:15:40.217 "enable_ktls": false 00:15:40.217 } 00:15:40.217 } 00:15:40.217 ] 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "subsystem": "vmd", 00:15:40.217 "config": [] 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "subsystem": "accel", 00:15:40.217 "config": [ 00:15:40.217 { 00:15:40.217 "method": "accel_set_options", 00:15:40.217 "params": { 00:15:40.217 "small_cache_size": 128, 00:15:40.217 "large_cache_size": 16, 00:15:40.217 "task_count": 2048, 00:15:40.217 "sequence_count": 2048, 00:15:40.217 "buf_count": 2048 00:15:40.217 } 00:15:40.217 } 00:15:40.217 ] 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "subsystem": "bdev", 00:15:40.217 "config": [ 00:15:40.217 { 00:15:40.217 "method": "bdev_set_options", 00:15:40.217 "params": { 00:15:40.217 "bdev_io_pool_size": 65535, 00:15:40.217 "bdev_io_cache_size": 256, 00:15:40.217 "bdev_auto_examine": true, 00:15:40.217 "iobuf_small_cache_size": 128, 00:15:40.217 "iobuf_large_cache_size": 16 00:15:40.217 } 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "method": "bdev_raid_set_options", 00:15:40.217 "params": { 00:15:40.217 "process_window_size_kb": 1024 00:15:40.217 } 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "method": 
"bdev_iscsi_set_options", 00:15:40.217 "params": { 00:15:40.217 "timeout_sec": 30 00:15:40.217 } 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "method": "bdev_nvme_set_options", 00:15:40.217 "params": { 00:15:40.217 "action_on_timeout": "none", 00:15:40.217 "timeout_us": 0, 00:15:40.217 "timeout_admin_us": 0, 00:15:40.217 "keep_alive_timeout_ms": 10000, 00:15:40.217 "arbitration_burst": 0, 00:15:40.217 "low_priority_weight": 0, 00:15:40.217 "medium_priority_weight": 0, 00:15:40.217 "high_priority_weight": 0, 00:15:40.217 "nvme_adminq_poll_period_us": 10000, 00:15:40.217 "nvme_ioq_poll_period_us": 0, 00:15:40.217 "io_queue_requests": 512, 00:15:40.217 "delay_cmd_submit": true, 00:15:40.217 "transport_retry_count": 4, 00:15:40.217 "bdev_retry_count": 3, 00:15:40.217 "transport_ack_timeout": 0, 00:15:40.217 "ctrlr_loss_timeout_sec": 0, 00:15:40.217 "reconnect_delay_sec": 0, 00:15:40.217 "fast_io_fail_timeout_sec": 0, 00:15:40.217 "disable_auto_failback": false, 00:15:40.217 "generate_uuids": false, 00:15:40.217 "transport_tos": 0, 00:15:40.217 "nvme_error_stat": false, 00:15:40.217 "rdma_srq_size": 0, 00:15:40.217 "io_path_stat": false, 00:15:40.217 "allow_accel_sequence": false, 00:15:40.217 "rdma_max_cq_size": 0, 00:15:40.217 "rdma_cm_event_timeout_ms": 0, 00:15:40.217 "dhchap_digests": [ 00:15:40.217 "sha256", 00:15:40.217 "sha384", 00:15:40.217 "sha512" 00:15:40.217 ], 00:15:40.217 "dhchap_dhgroups": [ 00:15:40.217 "null", 00:15:40.217 "ffdhe2048", 00:15:40.217 "ffdhe3072", 00:15:40.217 "ffdhe4096", 00:15:40.217 "ffdhe6144", 00:15:40.217 "ffdhe8192" 00:15:40.217 ] 00:15:40.217 } 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "method": "bdev_nvme_attach_controller", 00:15:40.217 "params": { 00:15:40.217 "name": "nvme0", 00:15:40.217 "trtype": "TCP", 00:15:40.217 "adrfam": "IPv4", 00:15:40.217 "traddr": "10.0.0.2", 00:15:40.217 "trsvcid": "4420", 00:15:40.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.217 "prchk_reftag": false, 00:15:40.217 "prchk_guard": false, 00:15:40.217 "ctrlr_loss_timeout_sec": 0, 00:15:40.217 "reconnect_delay_sec": 0, 00:15:40.217 "fast_io_fail_timeout_sec": 0, 00:15:40.217 "psk": "key0", 00:15:40.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.217 "hdgst": false, 00:15:40.217 "ddgst": false 00:15:40.217 } 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "method": "bdev_nvme_set_hotplug", 00:15:40.217 "params": { 00:15:40.217 "period_us": 100000, 00:15:40.217 "enable": false 00:15:40.217 } 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "method": "bdev_enable_histogram", 00:15:40.217 "params": { 00:15:40.217 "name": "nvme0n1", 00:15:40.217 "enable": true 00:15:40.217 } 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "method": "bdev_wait_for_examine" 00:15:40.217 } 00:15:40.217 ] 00:15:40.217 }, 00:15:40.217 { 00:15:40.217 "subsystem": "nbd", 00:15:40.217 "config": [] 00:15:40.217 } 00:15:40.217 ] 00:15:40.217 }' 00:15:40.217 16:11:41 -- target/tls.sh@266 -- # killprocess 3407589 00:15:40.217 16:11:41 -- common/autotest_common.sh@936 -- # '[' -z 3407589 ']' 00:15:40.217 16:11:41 -- common/autotest_common.sh@940 -- # kill -0 3407589 00:15:40.217 16:11:41 -- common/autotest_common.sh@941 -- # uname 00:15:40.217 16:11:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:40.217 16:11:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3407589 00:15:40.217 16:11:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:40.217 16:11:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:40.217 16:11:41 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3407589' 00:15:40.217 killing process with pid 3407589 00:15:40.217 16:11:41 -- common/autotest_common.sh@955 -- # kill 3407589 00:15:40.217 Received shutdown signal, test time was about 1.000000 seconds 00:15:40.217 00:15:40.217 Latency(us) 00:15:40.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.217 =================================================================================================================== 00:15:40.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:40.217 16:11:41 -- common/autotest_common.sh@960 -- # wait 3407589 00:15:40.475 16:11:41 -- target/tls.sh@267 -- # killprocess 3407434 00:15:40.475 16:11:41 -- common/autotest_common.sh@936 -- # '[' -z 3407434 ']' 00:15:40.475 16:11:41 -- common/autotest_common.sh@940 -- # kill -0 3407434 00:15:40.475 16:11:41 -- common/autotest_common.sh@941 -- # uname 00:15:40.475 16:11:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:40.475 16:11:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3407434 00:15:40.475 16:11:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:40.475 16:11:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:40.475 16:11:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3407434' 00:15:40.475 killing process with pid 3407434 00:15:40.475 16:11:41 -- common/autotest_common.sh@955 -- # kill 3407434 00:15:40.475 16:11:41 -- common/autotest_common.sh@960 -- # wait 3407434 00:15:41.043 16:11:42 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:41.043 16:11:42 -- target/tls.sh@269 -- # echo '{ 00:15:41.043 "subsystems": [ 00:15:41.043 { 00:15:41.043 "subsystem": "keyring", 00:15:41.043 "config": [ 00:15:41.043 { 00:15:41.043 "method": "keyring_file_add_key", 00:15:41.043 "params": { 00:15:41.043 "name": "key0", 00:15:41.043 "path": "/tmp/tmp.FeaQxlhqyw" 00:15:41.043 } 00:15:41.043 } 00:15:41.043 ] 00:15:41.043 }, 00:15:41.043 { 00:15:41.043 "subsystem": "iobuf", 00:15:41.043 "config": [ 00:15:41.043 { 00:15:41.043 "method": "iobuf_set_options", 00:15:41.043 "params": { 00:15:41.043 "small_pool_count": 8192, 00:15:41.043 "large_pool_count": 1024, 00:15:41.043 "small_bufsize": 8192, 00:15:41.043 "large_bufsize": 135168 00:15:41.043 } 00:15:41.043 } 00:15:41.043 ] 00:15:41.043 }, 00:15:41.043 { 00:15:41.043 "subsystem": "sock", 00:15:41.043 "config": [ 00:15:41.043 { 00:15:41.043 "method": "sock_impl_set_options", 00:15:41.043 "params": { 00:15:41.043 "impl_name": "posix", 00:15:41.043 "recv_buf_size": 2097152, 00:15:41.043 "send_buf_size": 2097152, 00:15:41.043 "enable_recv_pipe": true, 00:15:41.043 "enable_quickack": false, 00:15:41.043 "enable_placement_id": 0, 00:15:41.043 "enable_zerocopy_send_server": true, 00:15:41.043 "enable_zerocopy_send_client": false, 00:15:41.043 "zerocopy_threshold": 0, 00:15:41.043 "tls_version": 0, 00:15:41.043 "enable_ktls": false 00:15:41.043 } 00:15:41.043 }, 00:15:41.043 { 00:15:41.043 "method": "sock_impl_set_options", 00:15:41.043 "params": { 00:15:41.043 "impl_name": "ssl", 00:15:41.043 "recv_buf_size": 4096, 00:15:41.043 "send_buf_size": 4096, 00:15:41.043 "enable_recv_pipe": true, 00:15:41.043 "enable_quickack": false, 00:15:41.043 "enable_placement_id": 0, 00:15:41.043 "enable_zerocopy_send_server": true, 00:15:41.043 "enable_zerocopy_send_client": false, 00:15:41.043 "zerocopy_threshold": 0, 00:15:41.043 "tls_version": 0, 00:15:41.043 "enable_ktls": false 
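The killprocess helper seen throughout these teardowns verifies the pid is still alive and inspects its comm name before signalling and reaping it. A condensed sketch of the traced sequence (the real helper also special-cases sudo-owned processes, elided here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # must still be running
        local pname
        pname=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 / reactor_1 above
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap and propagate the exit status
    }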
00:15:41.043 } 00:15:41.043 } 00:15:41.043 ] 00:15:41.043 }, 00:15:41.043 { 00:15:41.043 "subsystem": "vmd", 00:15:41.043 "config": [] 00:15:41.043 }, 00:15:41.043 { 00:15:41.043 "subsystem": "accel", 00:15:41.043 "config": [ 00:15:41.043 { 00:15:41.043 "method": "accel_set_options", 00:15:41.043 "params": { 00:15:41.043 "small_cache_size": 128, 00:15:41.043 "large_cache_size": 16, 00:15:41.043 "task_count": 2048, 00:15:41.043 "sequence_count": 2048, 00:15:41.043 "buf_count": 2048 00:15:41.043 } 00:15:41.043 } 00:15:41.043 ] 00:15:41.043 }, 00:15:41.043 { 00:15:41.043 "subsystem": "bdev", 00:15:41.043 "config": [ 00:15:41.043 { 00:15:41.043 "method": "bdev_set_options", 00:15:41.043 "params": { 00:15:41.044 "bdev_io_pool_size": 65535, 00:15:41.044 "bdev_io_cache_size": 256, 00:15:41.044 "bdev_auto_examine": true, 00:15:41.044 "iobuf_small_cache_size": 128, 00:15:41.044 "iobuf_large_cache_size": 16 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "bdev_raid_set_options", 00:15:41.044 "params": { 00:15:41.044 "process_window_size_kb": 1024 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "bdev_iscsi_set_options", 00:15:41.044 "params": { 00:15:41.044 "timeout_sec": 30 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "bdev_nvme_set_options", 00:15:41.044 "params": { 00:15:41.044 "action_on_timeout": "none", 00:15:41.044 "timeout_us": 0, 00:15:41.044 "timeout_admin_us": 0, 00:15:41.044 "keep_alive_timeout_ms": 10000, 00:15:41.044 "arbitration_burst": 0, 00:15:41.044 "low_priority_weight": 0, 00:15:41.044 "medium_priority_weight": 0, 00:15:41.044 "high_priority_weight": 0, 00:15:41.044 "nvme_adminq_poll_period_us": 10000, 00:15:41.044 "nvme_ioq_poll_period_us": 0, 00:15:41.044 "io_queue_requests": 0, 00:15:41.044 "delay_cmd_submit": true, 00:15:41.044 "transport_retry_count": 4, 00:15:41.044 "bdev_retry_count": 3, 00:15:41.044 "transport_ack_timeout": 0, 00:15:41.044 "ctrlr_loss_timeout_sec": 0, 00:15:41.044 "reconnect_delay_sec": 0, 00:15:41.044 "fast_io_fail_timeout_sec": 0, 00:15:41.044 "disable_auto_failback": false, 00:15:41.044 "generate_uuids": false, 00:15:41.044 "transport_tos": 0, 00:15:41.044 "nvme_error_stat": false, 00:15:41.044 "rdma_srq_size": 0, 00:15:41.044 "io_path_stat": false, 00:15:41.044 "allow_accel_sequence": false, 00:15:41.044 "rdma_max_cq_size": 0, 00:15:41.044 "rdma_cm_event_timeout_ms": 0, 00:15:41.044 "dhchap_digests": [ 00:15:41.044 "sha256", 00:15:41.044 "sha384", 00:15:41.044 "sha512" 00:15:41.044 ], 00:15:41.044 "dhchap_dhgroups": [ 00:15:41.044 "null", 00:15:41.044 "ffdhe2048", 00:15:41.044 "ffdhe3072", 00:15:41.044 "ffdhe4096", 00:15:41.044 "ffdhe6144", 00:15:41.044 "ffdhe8192" 00:15:41.044 ] 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "bdev_nvme_set_hotplug", 00:15:41.044 "params": { 00:15:41.044 "period_us": 100000, 00:15:41.044 "enable": false 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "bdev_malloc_create", 00:15:41.044 "params": { 00:15:41.044 "name": "malloc0", 00:15:41.044 "num_blocks": 8192, 00:15:41.044 "block_size": 4096, 00:15:41.044 "physical_block_size": 4096, 00:15:41.044 "uuid": "7cdf4780-cb7b-4815-bd59-a47d3d69f295", 00:15:41.044 "optimal_io_boundary": 0 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "bdev_wait_for_examine" 00:15:41.044 } 00:15:41.044 ] 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "subsystem": "nbd", 00:15:41.044 "config": [] 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "subsystem": "scheduler", 
00:15:41.044 "config": [ 00:15:41.044 { 00:15:41.044 "method": "framework_set_scheduler", 00:15:41.044 "params": { 00:15:41.044 "name": "static" 00:15:41.044 } 00:15:41.044 } 00:15:41.044 ] 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "subsystem": "nvmf", 00:15:41.044 "config": [ 00:15:41.044 { 00:15:41.044 "method": "nvmf_set_config", 00:15:41.044 "params": { 00:15:41.044 "discovery_filter": "match_any", 00:15:41.044 "admin_cmd_passthru": { 00:15:41.044 "identify_ctrlr": false 00:15:41.044 } 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "nvmf_set_max_subsystems", 00:15:41.044 "params": { 00:15:41.044 "max_subsystems": 1024 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "nvmf_set_crdt", 00:15:41.044 "params": { 00:15:41.044 "crdt1": 0, 00:15:41.044 "crdt2": 0, 00:15:41.044 "crdt3": 0 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "nvmf_create_transport", 00:15:41.044 "params": { 00:15:41.044 "trtype": "TCP", 00:15:41.044 "max_queue_depth": 128, 00:15:41.044 "max_io_qpairs_per_ctrlr": 127, 00:15:41.044 "in_capsule_data_size": 4096, 00:15:41.044 "max_io_size": 131072, 00:15:41.044 "io_unit_size": 131072, 00:15:41.044 "max_aq_depth": 128, 00:15:41.044 "num_shared_buffers": 511, 00:15:41.044 "buf_cache_size": 4294967295, 00:15:41.044 "dif_insert_or_strip": false, 00:15:41.044 "zcopy": false, 00:15:41.044 "c2h_success": false, 00:15:41.044 "sock_priority": 0, 00:15:41.044 "abort_timeout_sec": 1, 00:15:41.044 "ack_timeout": 0, 00:15:41.044 "data_wr_pool_size": 0 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "nvmf_create_subsystem", 00:15:41.044 "params": { 00:15:41.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.044 "allow_any_host": false, 00:15:41.044 "serial_number": "00000000000000000000", 00:15:41.044 "model_number": "SPDK bdev 16:11:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:41.044 Controller", 00:15:41.044 "max_namespaces": 32, 00:15:41.044 "min_cntlid": 1, 00:15:41.044 "max_cntlid": 65519, 00:15:41.044 "ana_reporting": false 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "nvmf_subsystem_add_host", 00:15:41.044 "params": { 00:15:41.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.044 "host": "nqn.2016-06.io.spdk:host1", 00:15:41.044 "psk": "key0" 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "nvmf_subsystem_add_ns", 00:15:41.044 "params": { 00:15:41.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.044 "namespace": { 00:15:41.044 "nsid": 1, 00:15:41.044 "bdev_name": "malloc0", 00:15:41.044 "nguid": "7CDF4780CB7B4815BD59A47D3D69F295", 00:15:41.044 "uuid": "7cdf4780-cb7b-4815-bd59-a47d3d69f295", 00:15:41.044 "no_auto_visible": false 00:15:41.044 } 00:15:41.044 } 00:15:41.044 }, 00:15:41.044 { 00:15:41.044 "method": "nvmf_subsystem_add_listener", 00:15:41.044 "params": { 00:15:41.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.044 "listen_address": { 00:15:41.044 "trtype": "TCP", 00:15:41.044 "adrfam": "IPv4", 00:15:41.044 "traddr": "10.0.0.2", 00:15:41.044 "trsvcid": "4420" 00:15:41.044 }, 00:15:41.044 "secure_channel": true 00:15:41.044 } 00:15:41.044 } 00:15:41.044 ] 00:15:41.044 } 00:15:41.044 ] 00:15:41.044 }' 00:15:41.044 16:11:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:41.044 16:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:41.044 16:11:42 -- nvmf/common.sh@470 -- # nvmfpid=3408002 00:15:41.044 16:11:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:41.044 16:11:42 -- nvmf/common.sh@471 -- # waitforlisten 3408002 00:15:41.044 16:11:42 -- common/autotest_common.sh@817 -- # '[' -z 3408002 ']' 00:15:41.044 16:11:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.044 16:11:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:41.044 16:11:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.044 16:11:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:41.044 16:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:41.044 [2024-04-24 16:11:42.073279] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:41.044 [2024-04-24 16:11:42.073367] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.044 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.044 [2024-04-24 16:11:42.141041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.044 [2024-04-24 16:11:42.250072] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.044 [2024-04-24 16:11:42.250138] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.044 [2024-04-24 16:11:42.250166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.044 [2024-04-24 16:11:42.250181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.044 [2024-04-24 16:11:42.250194] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
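Note how this restarted target takes its configuration in-band: the test echoes the JSON captured by save_config into a descriptor and hands it over as -c /dev/fd/62. With process substitution the same pattern looks like this (a sketch; $tgtcfg holds the JSON dumped above):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")
    # the saved config round-trips the PSK path, e.g.:
    echo "$tgtcfg" | jq -r '.subsystems[] | select(.subsystem == "keyring") | .config[].params.path'    # /tmp/tmp.FeaQxlhqyw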
00:15:41.044 [2024-04-24 16:11:42.250294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.303 [2024-04-24 16:11:42.490490] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.303 [2024-04-24 16:11:42.522486] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:41.303 [2024-04-24 16:11:42.529974] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.867 16:11:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:41.867 16:11:42 -- common/autotest_common.sh@850 -- # return 0 00:15:41.867 16:11:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:41.867 16:11:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:41.867 16:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:41.867 16:11:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.867 16:11:43 -- target/tls.sh@272 -- # bdevperf_pid=3408046 00:15:41.867 16:11:43 -- target/tls.sh@273 -- # waitforlisten 3408046 /var/tmp/bdevperf.sock 00:15:41.867 16:11:43 -- common/autotest_common.sh@817 -- # '[' -z 3408046 ']' 00:15:41.867 16:11:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.867 16:11:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:41.867 16:11:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:41.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:41.867 16:11:43 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:41.867 16:11:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:41.867 16:11:43 -- target/tls.sh@270 -- # echo '{ 00:15:41.867 "subsystems": [ 00:15:41.867 { 00:15:41.867 "subsystem": "keyring", 00:15:41.867 "config": [ 00:15:41.867 { 00:15:41.867 "method": "keyring_file_add_key", 00:15:41.867 "params": { 00:15:41.867 "name": "key0", 00:15:41.867 "path": "/tmp/tmp.FeaQxlhqyw" 00:15:41.867 } 00:15:41.867 } 00:15:41.867 ] 00:15:41.867 }, 00:15:41.867 { 00:15:41.867 "subsystem": "iobuf", 00:15:41.867 "config": [ 00:15:41.867 { 00:15:41.867 "method": "iobuf_set_options", 00:15:41.867 "params": { 00:15:41.867 "small_pool_count": 8192, 00:15:41.867 "large_pool_count": 1024, 00:15:41.867 "small_bufsize": 8192, 00:15:41.867 "large_bufsize": 135168 00:15:41.867 } 00:15:41.867 } 00:15:41.867 ] 00:15:41.867 }, 00:15:41.867 { 00:15:41.867 "subsystem": "sock", 00:15:41.867 "config": [ 00:15:41.867 { 00:15:41.867 "method": "sock_impl_set_options", 00:15:41.867 "params": { 00:15:41.867 "impl_name": "posix", 00:15:41.867 "recv_buf_size": 2097152, 00:15:41.867 "send_buf_size": 2097152, 00:15:41.867 "enable_recv_pipe": true, 00:15:41.867 "enable_quickack": false, 00:15:41.867 "enable_placement_id": 0, 00:15:41.867 "enable_zerocopy_send_server": true, 00:15:41.867 "enable_zerocopy_send_client": false, 00:15:41.867 "zerocopy_threshold": 0, 00:15:41.867 "tls_version": 0, 00:15:41.867 "enable_ktls": false 00:15:41.867 } 00:15:41.867 }, 00:15:41.867 { 00:15:41.867 "method": "sock_impl_set_options", 00:15:41.867 "params": { 00:15:41.867 "impl_name": "ssl", 00:15:41.867 "recv_buf_size": 4096, 00:15:41.867 "send_buf_size": 4096, 00:15:41.867 "enable_recv_pipe": true, 00:15:41.867 
"enable_quickack": false, 00:15:41.867 "enable_placement_id": 0, 00:15:41.867 "enable_zerocopy_send_server": true, 00:15:41.867 "enable_zerocopy_send_client": false, 00:15:41.867 "zerocopy_threshold": 0, 00:15:41.867 "tls_version": 0, 00:15:41.867 "enable_ktls": false 00:15:41.867 } 00:15:41.867 } 00:15:41.867 ] 00:15:41.867 }, 00:15:41.867 { 00:15:41.867 "subsystem": "vmd", 00:15:41.867 "config": [] 00:15:41.867 }, 00:15:41.867 { 00:15:41.867 "subsystem": "accel", 00:15:41.867 "config": [ 00:15:41.867 { 00:15:41.867 "method": "accel_set_options", 00:15:41.867 "params": { 00:15:41.867 "small_cache_size": 128, 00:15:41.867 "large_cache_size": 16, 00:15:41.867 "task_count": 2048, 00:15:41.867 "sequence_count": 2048, 00:15:41.867 "buf_count": 2048 00:15:41.867 } 00:15:41.867 } 00:15:41.867 ] 00:15:41.867 }, 00:15:41.867 { 00:15:41.867 "subsystem": "bdev", 00:15:41.867 "config": [ 00:15:41.867 { 00:15:41.867 "method": "bdev_set_options", 00:15:41.867 "params": { 00:15:41.867 "bdev_io_pool_size": 65535, 00:15:41.867 "bdev_io_cache_size": 256, 00:15:41.867 "bdev_auto_examine": true, 00:15:41.867 "iobuf_small_cache_size": 128, 00:15:41.867 "iobuf_large_cache_size": 16 00:15:41.867 } 00:15:41.867 }, 00:15:41.867 { 00:15:41.867 "method": "bdev_raid_set_options", 00:15:41.867 "params": { 00:15:41.867 "process_window_size_kb": 1024 00:15:41.867 } 00:15:41.867 }, 00:15:41.867 { 00:15:41.867 "method": "bdev_iscsi_set_options", 00:15:41.867 "params": { 00:15:41.867 "timeout_sec": 30 00:15:41.867 } 00:15:41.867 }, 00:15:41.867 { 00:15:41.867 "method": "bdev_nvme_set_options", 00:15:41.867 "params": { 00:15:41.867 "action_on_timeout": "none", 00:15:41.867 "timeout_us": 0, 00:15:41.867 "timeout_admin_us": 0, 00:15:41.867 "keep_alive_timeout_ms": 10000, 00:15:41.867 "arbitration_burst": 0, 00:15:41.867 "low_priority_weight": 0, 00:15:41.867 "medium_priority_weight": 0, 00:15:41.867 "high_priority_weight": 0, 00:15:41.867 "nvme_adminq_poll_period_us": 10000, 00:15:41.867 "nvme_ioq_poll_period_us": 0, 00:15:41.867 "io_queue_requests": 512, 00:15:41.867 "delay_cmd_submit": true, 00:15:41.867 "transport_retry_count": 4, 00:15:41.867 "bdev_retry_count": 3, 00:15:41.867 "transport_ack_timeout": 0, 00:15:41.867 "ctrlr_loss_timeout_sec": 0, 00:15:41.867 "reconnect_delay_sec": 0, 00:15:41.867 "fast_io_fail_timeout_sec": 0, 00:15:41.867 "disable_auto_failback": false, 00:15:41.867 "generate_uuids": false, 00:15:41.867 "transport_tos": 0, 00:15:41.867 "nvme_error_stat": false, 00:15:41.867 "rdma_srq_size": 0, 00:15:41.867 "io_path_stat": false, 00:15:41.867 "allow_accel_sequence": false, 00:15:41.867 "rdma_max_cq_size": 0, 00:15:41.867 "rdma_cm_event_timeout_ms": 0, 00:15:41.867 "dhchap_digests": [ 00:15:41.867 "sha256", 00:15:41.867 "sha384", 00:15:41.867 "sha512" 00:15:41.867 ], 00:15:41.868 "dhchap_dhgroups": [ 00:15:41.868 "null", 00:15:41.868 "ffdhe2048", 00:15:41.868 "ffdhe3072", 00:15:41.868 "ffdhe4096", 00:15:41.868 "ffdhe6144", 00:15:41.868 "ffdhe8192" 00:15:41.868 ] 00:15:41.868 } 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "method": "bdev_nvme_attach_controller", 00:15:41.868 "params": { 00:15:41.868 "name": "nvme0", 00:15:41.868 "trtype": "TCP", 00:15:41.868 "adrfam": "IPv4", 00:15:41.868 "traddr": "10.0.0.2", 00:15:41.868 "trsvcid": "4420", 00:15:41.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.868 "prchk_reftag": false, 00:15:41.868 "prchk_guard": false, 00:15:41.868 "ctrlr_loss_timeout_sec": 0, 00:15:41.868 "reconnect_delay_sec": 0, 00:15:41.868 "fast_io_fail_timeout_sec": 0, 00:15:41.868 "psk": 
"key0", 00:15:41.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.868 "hdgst": false, 00:15:41.868 "ddgst": false 00:15:41.868 } 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "method": "bdev_nvme_set_hotplug", 00:15:41.868 "params": { 00:15:41.868 "period_us": 100000, 00:15:41.868 "enable": false 00:15:41.868 } 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "method": "bdev_enable_histogram", 00:15:41.868 "params": { 00:15:41.868 "name": "nvme0n1", 00:15:41.868 "enable": true 00:15:41.868 } 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "method": "bdev_wait_for_examine" 00:15:41.868 } 00:15:41.868 ] 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "subsystem": "nbd", 00:15:41.868 "config": [] 00:15:41.868 } 00:15:41.868 ] 00:15:41.868 }' 00:15:41.868 16:11:43 -- common/autotest_common.sh@10 -- # set +x 00:15:41.868 [2024-04-24 16:11:43.065672] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:41.868 [2024-04-24 16:11:43.065785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408046 ] 00:15:41.868 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.868 [2024-04-24 16:11:43.129091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.125 [2024-04-24 16:11:43.232877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.125 [2024-04-24 16:11:43.408469] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:43.058 16:11:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:43.058 16:11:44 -- common/autotest_common.sh@850 -- # return 0 00:15:43.058 16:11:44 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:43.058 16:11:44 -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:43.058 16:11:44 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.058 16:11:44 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:43.317 Running I/O for 1 seconds... 
00:15:44.250 00:15:44.250 Latency(us) 00:15:44.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.250 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:44.250 Verification LBA range: start 0x0 length 0x2000 00:15:44.250 nvme0n1 : 1.05 2283.30 8.92 0.00 0.00 54847.39 6844.87 75730.49 00:15:44.250 =================================================================================================================== 00:15:44.250 Total : 2283.30 8.92 0.00 0.00 54847.39 6844.87 75730.49 00:15:44.250 0 00:15:44.250 16:11:45 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:44.250 16:11:45 -- target/tls.sh@279 -- # cleanup 00:15:44.250 16:11:45 -- target/tls.sh@15 -- # process_shm --id 0 00:15:44.250 16:11:45 -- common/autotest_common.sh@794 -- # type=--id 00:15:44.250 16:11:45 -- common/autotest_common.sh@795 -- # id=0 00:15:44.250 16:11:45 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:44.250 16:11:45 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:44.250 16:11:45 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:44.250 16:11:45 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:44.250 16:11:45 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:44.250 16:11:45 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:44.250 nvmf_trace.0 00:15:44.509 16:11:45 -- common/autotest_common.sh@809 -- # return 0 00:15:44.509 16:11:45 -- target/tls.sh@16 -- # killprocess 3408046 00:15:44.509 16:11:45 -- common/autotest_common.sh@936 -- # '[' -z 3408046 ']' 00:15:44.509 16:11:45 -- common/autotest_common.sh@940 -- # kill -0 3408046 00:15:44.509 16:11:45 -- common/autotest_common.sh@941 -- # uname 00:15:44.509 16:11:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.509 16:11:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3408046 00:15:44.509 16:11:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:44.509 16:11:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:44.509 16:11:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3408046' 00:15:44.509 killing process with pid 3408046 00:15:44.509 16:11:45 -- common/autotest_common.sh@955 -- # kill 3408046 00:15:44.509 Received shutdown signal, test time was about 1.000000 seconds 00:15:44.509 00:15:44.509 Latency(us) 00:15:44.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.509 =================================================================================================================== 00:15:44.509 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:44.509 16:11:45 -- common/autotest_common.sh@960 -- # wait 3408046 00:15:44.767 16:11:45 -- target/tls.sh@17 -- # nvmftestfini 00:15:44.767 16:11:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:44.767 16:11:45 -- nvmf/common.sh@117 -- # sync 00:15:44.767 16:11:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.767 16:11:45 -- nvmf/common.sh@120 -- # set +e 00:15:44.767 16:11:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.767 16:11:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.767 rmmod nvme_tcp 00:15:44.767 rmmod nvme_fabrics 00:15:44.767 rmmod nvme_keyring 00:15:44.767 16:11:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.767 16:11:45 -- nvmf/common.sh@124 -- 
# set -e 00:15:44.767 16:11:45 -- nvmf/common.sh@125 -- # return 0 00:15:44.767 16:11:45 -- nvmf/common.sh@478 -- # '[' -n 3408002 ']' 00:15:44.767 16:11:45 -- nvmf/common.sh@479 -- # killprocess 3408002 00:15:44.767 16:11:45 -- common/autotest_common.sh@936 -- # '[' -z 3408002 ']' 00:15:44.767 16:11:45 -- common/autotest_common.sh@940 -- # kill -0 3408002 00:15:44.767 16:11:45 -- common/autotest_common.sh@941 -- # uname 00:15:44.767 16:11:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.767 16:11:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3408002 00:15:44.767 16:11:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:44.767 16:11:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:44.767 16:11:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3408002' 00:15:44.767 killing process with pid 3408002 00:15:44.767 16:11:45 -- common/autotest_common.sh@955 -- # kill 3408002 00:15:44.767 16:11:45 -- common/autotest_common.sh@960 -- # wait 3408002 00:15:45.026 16:11:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:45.026 16:11:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:45.026 16:11:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:45.026 16:11:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.026 16:11:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:45.026 16:11:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.026 16:11:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.026 16:11:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.555 16:11:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.555 16:11:48 -- target/tls.sh@18 -- # rm -f /tmp/tmp.RQVvMpA68A /tmp/tmp.O4wjryNhWG /tmp/tmp.FeaQxlhqyw 00:15:47.555 00:15:47.555 real 1m21.250s 00:15:47.555 user 2m3.558s 00:15:47.555 sys 0m26.751s 00:15:47.555 16:11:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:47.555 16:11:48 -- common/autotest_common.sh@10 -- # set +x 00:15:47.555 ************************************ 00:15:47.555 END TEST nvmf_tls 00:15:47.555 ************************************ 00:15:47.555 16:11:48 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.555 16:11:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:47.555 16:11:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.555 16:11:48 -- common/autotest_common.sh@10 -- # set +x 00:15:47.555 ************************************ 00:15:47.555 START TEST nvmf_fips 00:15:47.555 ************************************ 00:15:47.555 16:11:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.555 * Looking for test storage... 
00:15:47.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:47.555 16:11:48 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.555 16:11:48 -- nvmf/common.sh@7 -- # uname -s 00:15:47.555 16:11:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.555 16:11:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.555 16:11:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.555 16:11:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.555 16:11:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.555 16:11:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.555 16:11:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.555 16:11:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.555 16:11:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.555 16:11:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.555 16:11:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.555 16:11:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.555 16:11:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.555 16:11:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.555 16:11:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.555 16:11:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.555 16:11:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.555 16:11:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.555 16:11:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.555 16:11:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.555 16:11:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.555 16:11:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.555 16:11:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.555 16:11:48 -- paths/export.sh@5 -- # export PATH 00:15:47.555 16:11:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.555 16:11:48 -- nvmf/common.sh@47 -- # : 0 00:15:47.555 16:11:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.555 16:11:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.555 16:11:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.555 16:11:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.555 16:11:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.555 16:11:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.555 16:11:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.555 16:11:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.555 16:11:48 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.555 16:11:48 -- fips/fips.sh@89 -- # check_openssl_version 00:15:47.555 16:11:48 -- fips/fips.sh@83 -- # local target=3.0.0 00:15:47.555 16:11:48 -- fips/fips.sh@85 -- # openssl version 00:15:47.555 16:11:48 -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:47.555 16:11:48 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:47.555 16:11:48 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:47.555 16:11:48 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:47.555 16:11:48 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:47.556 16:11:48 -- scripts/common.sh@333 -- # IFS=.-: 00:15:47.556 16:11:48 -- scripts/common.sh@333 -- # read -ra ver1 00:15:47.556 16:11:48 -- scripts/common.sh@334 -- # IFS=.-: 00:15:47.556 16:11:48 -- scripts/common.sh@334 -- # read -ra ver2 00:15:47.556 16:11:48 -- scripts/common.sh@335 -- # local 'op=>=' 00:15:47.556 16:11:48 -- scripts/common.sh@337 -- # ver1_l=3 00:15:47.556 16:11:48 -- scripts/common.sh@338 -- # ver2_l=3 00:15:47.556 16:11:48 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:47.556 16:11:48 -- scripts/common.sh@341 -- # case "$op" in 00:15:47.556 16:11:48 -- scripts/common.sh@345 -- # : 1 00:15:47.556 16:11:48 -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:47.556 16:11:48 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.556 16:11:48 -- scripts/common.sh@362 -- # decimal 3 00:15:47.556 16:11:48 -- scripts/common.sh@350 -- # local d=3 00:15:47.556 16:11:48 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:47.556 16:11:48 -- scripts/common.sh@352 -- # echo 3 00:15:47.556 16:11:48 -- scripts/common.sh@362 -- # ver1[v]=3 00:15:47.556 16:11:48 -- scripts/common.sh@363 -- # decimal 3 00:15:47.556 16:11:48 -- scripts/common.sh@350 -- # local d=3 00:15:47.556 16:11:48 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:47.556 16:11:48 -- scripts/common.sh@352 -- # echo 3 00:15:47.556 16:11:48 -- scripts/common.sh@363 -- # ver2[v]=3 00:15:47.556 16:11:48 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:47.556 16:11:48 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:47.556 16:11:48 -- scripts/common.sh@361 -- # (( v++ )) 00:15:47.556 16:11:48 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.556 16:11:48 -- scripts/common.sh@362 -- # decimal 0 00:15:47.556 16:11:48 -- scripts/common.sh@350 -- # local d=0 00:15:47.556 16:11:48 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.556 16:11:48 -- scripts/common.sh@352 -- # echo 0 00:15:47.556 16:11:48 -- scripts/common.sh@362 -- # ver1[v]=0 00:15:47.556 16:11:48 -- scripts/common.sh@363 -- # decimal 0 00:15:47.556 16:11:48 -- scripts/common.sh@350 -- # local d=0 00:15:47.556 16:11:48 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.556 16:11:48 -- scripts/common.sh@352 -- # echo 0 00:15:47.556 16:11:48 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:47.556 16:11:48 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:47.556 16:11:48 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:47.556 16:11:48 -- scripts/common.sh@361 -- # (( v++ )) 00:15:47.556 16:11:48 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.556 16:11:48 -- scripts/common.sh@362 -- # decimal 9 00:15:47.556 16:11:48 -- scripts/common.sh@350 -- # local d=9 00:15:47.556 16:11:48 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:47.556 16:11:48 -- scripts/common.sh@352 -- # echo 9 00:15:47.556 16:11:48 -- scripts/common.sh@362 -- # ver1[v]=9 00:15:47.556 16:11:48 -- scripts/common.sh@363 -- # decimal 0 00:15:47.556 16:11:48 -- scripts/common.sh@350 -- # local d=0 00:15:47.556 16:11:48 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.556 16:11:48 -- scripts/common.sh@352 -- # echo 0 00:15:47.556 16:11:48 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:47.556 16:11:48 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:47.556 16:11:48 -- scripts/common.sh@364 -- # return 0 00:15:47.556 16:11:48 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:47.556 16:11:48 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:15:47.556 16:11:48 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:47.556 16:11:48 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:47.556 16:11:48 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:47.556 16:11:48 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:47.556 16:11:48 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:47.556 16:11:48 -- fips/fips.sh@113 -- # build_openssl_config 00:15:47.556 16:11:48 -- fips/fips.sh@37 -- # cat 00:15:47.556 16:11:48 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:15:47.556 16:11:48 -- fips/fips.sh@58 -- # cat - 00:15:47.556 16:11:48 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:47.556 16:11:48 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:47.556 16:11:48 -- fips/fips.sh@116 -- # mapfile -t providers 00:15:47.556 16:11:48 -- fips/fips.sh@116 -- # openssl list -providers 00:15:47.556 16:11:48 -- fips/fips.sh@116 -- # grep name 00:15:47.556 16:11:48 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:47.556 16:11:48 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:47.556 16:11:48 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:47.556 16:11:48 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:47.556 16:11:48 -- fips/fips.sh@127 -- # : 00:15:47.556 16:11:48 -- common/autotest_common.sh@638 -- # local es=0 00:15:47.556 16:11:48 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:47.556 16:11:48 -- common/autotest_common.sh@626 -- # local arg=openssl 00:15:47.556 16:11:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.556 16:11:48 -- common/autotest_common.sh@630 -- # type -t openssl 00:15:47.556 16:11:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.556 16:11:48 -- common/autotest_common.sh@632 -- # type -P openssl 00:15:47.556 16:11:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.556 16:11:48 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:15:47.556 16:11:48 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:15:47.556 16:11:48 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:15:47.556 Error setting digest 00:15:47.556 001292FE487F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:47.556 001292FE487F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:47.556 16:11:48 -- common/autotest_common.sh@641 -- # es=1 00:15:47.556 16:11:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:47.556 16:11:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:47.556 16:11:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:47.556 16:11:48 -- fips/fips.sh@130 -- # nvmftestinit 00:15:47.556 16:11:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:47.556 16:11:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.556 16:11:48 -- nvmf/common.sh@437 -- # prepare_net_devs 
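The trace above is fips.sh validating the host's OpenSSL 3.x FIPS setup: it compares `openssl version` against the 3.0.0 floor, confirms fips.so exists under the modules directory, checks that exactly two providers (base and fips) are loaded, and finally proves enforcement by watching `openssl md5` fail. A minimal standalone sketch of the same checks, assuming an OpenSSL 3.x host with a FIPS-enabled config already exported via OPENSSL_CONF (illustrative, not the fips.sh source):

#!/usr/bin/env bash
set -euo pipefail

# Exactly two providers should be active: the base provider and the FIPS provider.
mapfile -t providers < <(openssl list -providers | grep name)
(( ${#providers[@]} == 2 )) || { echo "expected 2 providers" >&2; exit 1; }
[[ ${providers[0]} == *base* ]] || { echo "base provider missing" >&2; exit 1; }
[[ ${providers[1]} == *fips* ]] || { echo "fips provider missing" >&2; exit 1; }

# Under FIPS a non-approved digest must be rejected; md5 succeeding here
# would mean FIPS is configured but not actually enforced.
if echo -n test | openssl md5 2>/dev/null; then
    echo "md5 unexpectedly succeeded; FIPS not enforced" >&2
    exit 1
fi
echo "FIPS providers active and md5 rejected"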
00:15:47.556 16:11:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:47.556 16:11:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:47.556 16:11:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.556 16:11:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.556 16:11:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.556 16:11:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:47.556 16:11:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:47.556 16:11:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.556 16:11:48 -- common/autotest_common.sh@10 -- # set +x 00:15:49.456 16:11:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:49.456 16:11:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.456 16:11:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.456 16:11:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.456 16:11:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.456 16:11:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.456 16:11:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.456 16:11:50 -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.456 16:11:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.456 16:11:50 -- nvmf/common.sh@296 -- # e810=() 00:15:49.456 16:11:50 -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.456 16:11:50 -- nvmf/common.sh@297 -- # x722=() 00:15:49.456 16:11:50 -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.456 16:11:50 -- nvmf/common.sh@298 -- # mlx=() 00:15:49.456 16:11:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.456 16:11:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.456 16:11:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.456 16:11:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.456 16:11:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.456 16:11:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.456 16:11:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:49.456 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:49.456 16:11:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.456 16:11:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:49.456 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:49.456 16:11:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.456 16:11:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.456 16:11:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.456 16:11:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:49.456 16:11:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.456 16:11:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:49.456 Found net devices under 0000:09:00.0: cvl_0_0 00:15:49.456 16:11:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.456 16:11:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.456 16:11:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.456 16:11:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:49.456 16:11:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.456 16:11:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:49.456 Found net devices under 0000:09:00.1: cvl_0_1 00:15:49.456 16:11:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.456 16:11:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:49.456 16:11:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:49.456 16:11:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:49.456 16:11:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.456 16:11:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.456 16:11:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.456 16:11:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.456 16:11:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.456 16:11:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.456 16:11:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.456 16:11:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.456 16:11:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.456 16:11:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.456 16:11:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.456 16:11:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.456 16:11:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.456 16:11:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.456 16:11:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:15:49.456 16:11:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.456 16:11:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.456 16:11:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.456 16:11:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.456 16:11:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:15:49.456 00:15:49.456 --- 10.0.0.2 ping statistics --- 00:15:49.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.456 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:15:49.456 16:11:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:15:49.456 00:15:49.456 --- 10.0.0.1 ping statistics --- 00:15:49.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.456 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:49.456 16:11:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.456 16:11:50 -- nvmf/common.sh@411 -- # return 0 00:15:49.456 16:11:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:49.456 16:11:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.456 16:11:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:49.456 16:11:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.456 16:11:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:49.456 16:11:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:49.456 16:11:50 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:49.456 16:11:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:49.456 16:11:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:49.456 16:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:49.456 16:11:50 -- nvmf/common.sh@470 -- # nvmfpid=3410402 00:15:49.456 16:11:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.456 16:11:50 -- nvmf/common.sh@471 -- # waitforlisten 3410402 00:15:49.456 16:11:50 -- common/autotest_common.sh@817 -- # '[' -z 3410402 ']' 00:15:49.456 16:11:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.456 16:11:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:49.456 16:11:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.456 16:11:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:49.456 16:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:49.456 [2024-04-24 16:11:50.700531] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
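Condensed, the nvmftestinit plumbing traced above builds a TCP test rig out of the two E810 ports: one port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and a ping in each direction proves the path before any NVMe traffic flows. A sketch of that pattern with the device names from this run (cvl_0_0/cvl_0_1; adjust for your NICs):

# the target side lives in its own namespace so initiator and target
# cannot short-circuit through the host stack
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# the initiator keeps cvl_0_1 in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port on the initiator-facing interface, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1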
00:15:49.456 [2024-04-24 16:11:50.700603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.456 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.715 [2024-04-24 16:11:50.763980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.715 [2024-04-24 16:11:50.865942] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.715 [2024-04-24 16:11:50.865994] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.715 [2024-04-24 16:11:50.866009] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.715 [2024-04-24 16:11:50.866036] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.715 [2024-04-24 16:11:50.866047] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.715 [2024-04-24 16:11:50.866102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.715 16:11:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:49.715 16:11:50 -- common/autotest_common.sh@850 -- # return 0 00:15:49.715 16:11:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:49.715 16:11:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:49.715 16:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:49.715 16:11:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.715 16:11:50 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:49.715 16:11:50 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:49.715 16:11:50 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:49.715 16:11:50 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:49.715 16:11:50 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:49.715 16:11:50 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:49.715 16:11:50 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:49.715 16:11:50 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.974 [2024-04-24 16:11:51.224467] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.974 [2024-04-24 16:11:51.240462] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:49.974 [2024-04-24 16:11:51.240704] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.232 [2024-04-24 16:11:51.273140] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:50.232 malloc0 00:15:50.232 16:11:51 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:50.232 16:11:51 -- fips/fips.sh@147 -- # bdevperf_pid=3410550 00:15:50.232 16:11:51 -- fips/fips.sh@148 -- # waitforlisten 3410550 /var/tmp/bdevperf.sock 00:15:50.232 16:11:51 -- common/autotest_common.sh@817 -- # '[' -z 3410550 ']' 00:15:50.232 16:11:51 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.232 16:11:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:50.232 16:11:51 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:50.232 16:11:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.232 16:11:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:50.232 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:50.232 [2024-04-24 16:11:51.365249] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:15:50.232 [2024-04-24 16:11:51.365341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410550 ] 00:15:50.232 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.232 [2024-04-24 16:11:51.422858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.489 [2024-04-24 16:11:51.522990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.054 16:11:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:51.054 16:11:52 -- common/autotest_common.sh@850 -- # return 0 00:15:51.054 16:11:52 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:51.312 [2024-04-24 16:11:52.505333] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:51.312 [2024-04-24 16:11:52.505467] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:51.312 TLSTESTn1 00:15:51.569 16:11:52 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.569 Running I/O for 10 seconds... 
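The ten-second verify run above was wired up in three RPC-driven steps: start bdevperf paused (-z) on its own RPC socket, attach an NVMe/TCP controller to the TLS listener with the PSK from key.txt (which creates the TLSTESTn1 bdev), then trigger the preconfigured workload. Reduced to those steps (paths relative to an SPDK checkout; queue depth and I/O size come from the bdevperf command line, not from the RPCs):

# 1. with -z, bdevperf starts idle and waits for RPC-driven configuration
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

# 2. attach to the TLS-enabled target; --psk points at the NVMeTLSkey file
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk ./test/nvmf/fips/key.txt

# 3. run the workload configured in step 1 and wait for the results
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests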
00:16:01.727 00:16:01.727 Latency(us) 00:16:01.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.727 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:01.727 Verification LBA range: start 0x0 length 0x2000 00:16:01.727 TLSTESTn1 : 10.04 2861.55 11.18 0.00 0.00 44627.16 8641.04 64856.37 00:16:01.727 =================================================================================================================== 00:16:01.727 Total : 2861.55 11.18 0.00 0.00 44627.16 8641.04 64856.37 00:16:01.727 0 00:16:01.727 16:12:02 -- fips/fips.sh@1 -- # cleanup 00:16:01.727 16:12:02 -- fips/fips.sh@15 -- # process_shm --id 0 00:16:01.727 16:12:02 -- common/autotest_common.sh@794 -- # type=--id 00:16:01.727 16:12:02 -- common/autotest_common.sh@795 -- # id=0 00:16:01.727 16:12:02 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:01.727 16:12:02 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:01.727 16:12:02 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:01.727 16:12:02 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:01.727 16:12:02 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:01.727 16:12:02 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:01.727 nvmf_trace.0 00:16:01.727 16:12:02 -- common/autotest_common.sh@809 -- # return 0 00:16:01.727 16:12:02 -- fips/fips.sh@16 -- # killprocess 3410550 00:16:01.727 16:12:02 -- common/autotest_common.sh@936 -- # '[' -z 3410550 ']' 00:16:01.727 16:12:02 -- common/autotest_common.sh@940 -- # kill -0 3410550 00:16:01.727 16:12:02 -- common/autotest_common.sh@941 -- # uname 00:16:01.727 16:12:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.728 16:12:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3410550 00:16:01.728 16:12:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:01.728 16:12:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:01.728 16:12:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3410550' 00:16:01.728 killing process with pid 3410550 00:16:01.728 16:12:02 -- common/autotest_common.sh@955 -- # kill 3410550 00:16:01.728 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.728 00:16:01.728 Latency(us) 00:16:01.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.728 =================================================================================================================== 00:16:01.728 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.728 [2024-04-24 16:12:02.858183] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:01.728 16:12:02 -- common/autotest_common.sh@960 -- # wait 3410550 00:16:01.986 16:12:03 -- fips/fips.sh@17 -- # nvmftestfini 00:16:01.986 16:12:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:01.986 16:12:03 -- nvmf/common.sh@117 -- # sync 00:16:01.986 16:12:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.986 16:12:03 -- nvmf/common.sh@120 -- # set +e 00:16:01.986 16:12:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.986 16:12:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.986 rmmod nvme_tcp 00:16:01.986 rmmod nvme_fabrics 00:16:01.986 rmmod nvme_keyring 
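The teardown beginning above, and finishing just below, follows a deliberately defensive pattern: flush outstanding I/O, drop set -e, and retry removal of nvme-tcp up to twenty times, because the kernel can briefly hold module references while connections drain. A condensed sketch of that nvmfcleanup shape (illustrative, not the verbatim common.sh; the retry cadence here is an assumption):

sync                         # let in-flight I/O settle before removing modules
set +e                       # rmmod may fail while references drain; keep going
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1                  # back off between attempts (cadence illustrative)
done
modprobe -v -r nvme-fabrics
set -e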
00:16:01.986 16:12:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.986 16:12:03 -- nvmf/common.sh@124 -- # set -e 00:16:01.986 16:12:03 -- nvmf/common.sh@125 -- # return 0 00:16:01.986 16:12:03 -- nvmf/common.sh@478 -- # '[' -n 3410402 ']' 00:16:01.986 16:12:03 -- nvmf/common.sh@479 -- # killprocess 3410402 00:16:01.986 16:12:03 -- common/autotest_common.sh@936 -- # '[' -z 3410402 ']' 00:16:01.986 16:12:03 -- common/autotest_common.sh@940 -- # kill -0 3410402 00:16:01.986 16:12:03 -- common/autotest_common.sh@941 -- # uname 00:16:01.986 16:12:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.986 16:12:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3410402 00:16:01.986 16:12:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:01.986 16:12:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:01.986 16:12:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3410402' 00:16:01.986 killing process with pid 3410402 00:16:01.986 16:12:03 -- common/autotest_common.sh@955 -- # kill 3410402 00:16:01.987 [2024-04-24 16:12:03.212406] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:01.987 16:12:03 -- common/autotest_common.sh@960 -- # wait 3410402 00:16:02.245 16:12:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:02.245 16:12:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:02.245 16:12:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:02.245 16:12:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.245 16:12:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.245 16:12:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.245 16:12:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.245 16:12:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.773 16:12:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.773 16:12:05 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:04.773 00:16:04.773 real 0m17.169s 00:16:04.773 user 0m21.578s 00:16:04.773 sys 0m6.408s 00:16:04.773 16:12:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:04.773 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:16:04.773 ************************************ 00:16:04.773 END TEST nvmf_fips 00:16:04.773 ************************************ 00:16:04.773 16:12:05 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:16:04.773 16:12:05 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:16:04.773 16:12:05 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:16:04.773 16:12:05 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:16:04.773 16:12:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.773 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 16:12:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:06.674 16:12:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.674 16:12:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.674 16:12:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.674 16:12:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.674 16:12:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.674 16:12:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.674 16:12:07 -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.674 16:12:07 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:16:06.674 16:12:07 -- nvmf/common.sh@296 -- # e810=() 00:16:06.674 16:12:07 -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.674 16:12:07 -- nvmf/common.sh@297 -- # x722=() 00:16:06.674 16:12:07 -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.674 16:12:07 -- nvmf/common.sh@298 -- # mlx=() 00:16:06.674 16:12:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.674 16:12:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.674 16:12:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.674 16:12:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.674 16:12:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.674 16:12:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.674 16:12:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:06.674 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:06.674 16:12:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.674 16:12:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:06.674 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:06.674 16:12:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.674 16:12:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.674 16:12:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.674 16:12:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.674 16:12:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.674 16:12:07 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:09:00.0: cvl_0_0' 00:16:06.674 Found net devices under 0000:09:00.0: cvl_0_0 00:16:06.674 16:12:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.674 16:12:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.674 16:12:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.674 16:12:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.674 16:12:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.674 16:12:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:06.674 Found net devices under 0000:09:00.1: cvl_0_1 00:16:06.674 16:12:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.674 16:12:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:06.674 16:12:07 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.674 16:12:07 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:16:06.674 16:12:07 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:16:06.674 16:12:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:06.674 16:12:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:06.674 16:12:07 -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 ************************************ 00:16:06.674 START TEST nvmf_perf_adq 00:16:06.674 ************************************ 00:16:06.674 16:12:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:16:06.674 * Looking for test storage... 00:16:06.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.674 16:12:07 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.674 16:12:07 -- nvmf/common.sh@7 -- # uname -s 00:16:06.674 16:12:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.674 16:12:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.674 16:12:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.674 16:12:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.674 16:12:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.674 16:12:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.674 16:12:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.674 16:12:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.674 16:12:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.674 16:12:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.674 16:12:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.674 16:12:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.674 16:12:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.674 16:12:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.674 16:12:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.674 16:12:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.674 16:12:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.674 16:12:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.674 16:12:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.674 16:12:07 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.674 16:12:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.674 16:12:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.674 16:12:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.674 16:12:07 -- paths/export.sh@5 -- # export PATH 00:16:06.674 16:12:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.674 16:12:07 -- nvmf/common.sh@47 -- # : 0 00:16:06.674 16:12:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.674 16:12:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.674 16:12:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.674 16:12:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.674 16:12:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.674 16:12:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.674 16:12:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.674 16:12:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.674 16:12:07 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:16:06.674 16:12:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:06.674 16:12:07 -- common/autotest_common.sh@10 -- # set +x 00:16:08.575 16:12:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:08.575 16:12:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.575 16:12:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.575 16:12:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:08.575 
16:12:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.575 16:12:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.575 16:12:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:08.575 16:12:09 -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.575 16:12:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.575 16:12:09 -- nvmf/common.sh@296 -- # e810=() 00:16:08.575 16:12:09 -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.575 16:12:09 -- nvmf/common.sh@297 -- # x722=() 00:16:08.575 16:12:09 -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.575 16:12:09 -- nvmf/common.sh@298 -- # mlx=() 00:16:08.575 16:12:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.575 16:12:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.575 16:12:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.575 16:12:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:08.575 16:12:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.575 16:12:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.575 16:12:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:08.575 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:08.575 16:12:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.575 16:12:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:08.575 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:08.575 16:12:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.575 16:12:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:08.575 16:12:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:16:08.575 16:12:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.575 16:12:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:08.575 16:12:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.575 16:12:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:08.575 Found net devices under 0000:09:00.0: cvl_0_0 00:16:08.575 16:12:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.575 16:12:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.575 16:12:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.575 16:12:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:08.575 16:12:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.575 16:12:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:08.575 Found net devices under 0000:09:00.1: cvl_0_1 00:16:08.575 16:12:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.575 16:12:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:08.575 16:12:09 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.575 16:12:09 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:16:08.575 16:12:09 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:16:08.575 16:12:09 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:16:08.575 16:12:09 -- target/perf_adq.sh@52 -- # rmmod ice 00:16:09.143 16:12:10 -- target/perf_adq.sh@53 -- # modprobe ice 00:16:10.520 16:12:11 -- target/perf_adq.sh@54 -- # sleep 5 00:16:15.846 16:12:16 -- target/perf_adq.sh@67 -- # nvmftestinit 00:16:15.846 16:12:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:15.846 16:12:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.846 16:12:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:15.846 16:12:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:15.846 16:12:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:15.846 16:12:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.846 16:12:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.846 16:12:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.846 16:12:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:15.846 16:12:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:15.846 16:12:16 -- common/autotest_common.sh@10 -- # set +x 00:16:15.846 16:12:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:15.846 16:12:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:15.846 16:12:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:15.846 16:12:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:15.846 16:12:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:15.846 16:12:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:15.846 16:12:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:15.846 16:12:16 -- nvmf/common.sh@295 -- # net_devs=() 00:16:15.846 16:12:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:15.846 16:12:16 -- nvmf/common.sh@296 -- # e810=() 00:16:15.846 16:12:16 -- nvmf/common.sh@296 -- # local -ga e810 00:16:15.846 16:12:16 -- nvmf/common.sh@297 -- # x722=() 00:16:15.846 16:12:16 -- nvmf/common.sh@297 -- # local -ga x722 00:16:15.846 16:12:16 -- nvmf/common.sh@298 -- # mlx=() 00:16:15.846 16:12:16 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:16:15.846 16:12:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.846 16:12:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:15.846 16:12:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:15.846 16:12:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:15.846 16:12:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.846 16:12:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:15.846 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:15.846 16:12:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.846 16:12:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:15.846 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:15.846 16:12:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.846 16:12:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.847 16:12:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.847 16:12:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:15.847 16:12:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:15.847 16:12:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:15.847 16:12:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.847 16:12:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.847 16:12:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:15.847 16:12:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.847 16:12:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:15.847 Found net devices under 0000:09:00.0: cvl_0_0 00:16:15.847 16:12:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.847 16:12:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.847 16:12:16 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.847 16:12:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:15.847 16:12:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.847 16:12:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:15.847 Found net devices under 0000:09:00.1: cvl_0_1 00:16:15.847 16:12:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.847 16:12:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:15.847 16:12:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:15.847 16:12:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:15.847 16:12:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:15.847 16:12:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:15.847 16:12:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.847 16:12:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.847 16:12:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.847 16:12:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:15.847 16:12:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.847 16:12:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.847 16:12:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:15.847 16:12:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.847 16:12:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.847 16:12:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:15.847 16:12:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:15.847 16:12:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.847 16:12:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.847 16:12:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.847 16:12:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.847 16:12:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:15.847 16:12:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.847 16:12:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.847 16:12:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.847 16:12:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:16:15.847 00:16:15.847 --- 10.0.0.2 ping statistics --- 00:16:15.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.847 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:15.847 16:12:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:16:15.847 00:16:15.847 --- 10.0.0.1 ping statistics --- 00:16:15.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.847 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:16:15.847 16:12:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.847 16:12:16 -- nvmf/common.sh@411 -- # return 0 00:16:15.847 16:12:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:15.847 16:12:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.847 16:12:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:15.847 16:12:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:15.847 16:12:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.847 16:12:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:15.847 16:12:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:15.847 16:12:16 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:15.847 16:12:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:15.847 16:12:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:15.847 16:12:16 -- common/autotest_common.sh@10 -- # set +x 00:16:15.847 16:12:16 -- nvmf/common.sh@470 -- # nvmfpid=3416930 00:16:15.847 16:12:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:15.847 16:12:16 -- nvmf/common.sh@471 -- # waitforlisten 3416930 00:16:15.847 16:12:16 -- common/autotest_common.sh@817 -- # '[' -z 3416930 ']' 00:16:15.847 16:12:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.847 16:12:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:15.847 16:12:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.847 16:12:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:15.847 16:12:16 -- common/autotest_common.sh@10 -- # set +x 00:16:15.847 [2024-04-24 16:12:16.910491] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:16:15.847 [2024-04-24 16:12:16.910559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.847 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.847 [2024-04-24 16:12:16.981442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.847 [2024-04-24 16:12:17.097953] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.847 [2024-04-24 16:12:17.098008] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.847 [2024-04-24 16:12:17.098022] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.847 [2024-04-24 16:12:17.098048] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.847 [2024-04-24 16:12:17.098059] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
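The nvmf_tcp_init sequence above turns the two E810 ports into a back-to-back NVMe/TCP test bed: the target port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, so traffic between the two sides crosses the physical link. A condensed sketch of the same steps, run as root with the interface names reported above:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace

The two pings are exactly the reachability checks whose output appears in the trace; only after both succeed is the target application started inside the namespace.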
00:16:15.847 [2024-04-24 16:12:17.098113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.847 [2024-04-24 16:12:17.098418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.847 [2024-04-24 16:12:17.098449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.847 [2024-04-24 16:12:17.098451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.106 16:12:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:16.106 16:12:17 -- common/autotest_common.sh@850 -- # return 0 00:16:16.106 16:12:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:16.106 16:12:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:16.106 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.106 16:12:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.106 16:12:17 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:16:16.106 16:12:17 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:16:16.106 16:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.106 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.106 16:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.106 16:12:17 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:16:16.106 16:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.106 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.106 16:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.106 16:12:17 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:16:16.106 16:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.106 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.106 [2024-04-24 16:12:17.307608] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.106 16:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.106 16:12:17 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:16.106 16:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.106 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.106 Malloc1 00:16:16.106 16:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.106 16:12:17 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:16.106 16:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.106 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.106 16:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.106 16:12:17 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:16.106 16:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.106 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.106 16:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.106 16:12:17 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.106 16:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.106 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.106 [2024-04-24 16:12:17.360878] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.106 16:12:17 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.106 16:12:17 -- target/perf_adq.sh@73 -- # perfpid=3417073 00:16:16.106 16:12:17 -- target/perf_adq.sh@74 -- # sleep 2 00:16:16.106 16:12:17 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:16.364 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.266 16:12:19 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:16:18.266 16:12:19 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:16:18.266 16:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.266 16:12:19 -- target/perf_adq.sh@76 -- # wc -l 00:16:18.266 16:12:19 -- common/autotest_common.sh@10 -- # set +x 00:16:18.266 16:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.266 16:12:19 -- target/perf_adq.sh@76 -- # count=4 00:16:18.266 16:12:19 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:16:18.266 16:12:19 -- target/perf_adq.sh@81 -- # wait 3417073 00:16:26.377 Initializing NVMe Controllers 00:16:26.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:26.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:26.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:26.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:26.377 Initialization complete. Launching workers. 00:16:26.377 ======================================================== 00:16:26.377 Latency(us) 00:16:26.377 Device Information : IOPS MiB/s Average min max 00:16:26.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10896.36 42.56 5873.99 1951.73 9016.08 00:16:26.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8580.59 33.52 7460.78 2929.85 12841.48 00:16:26.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10482.96 40.95 6105.92 2231.33 9617.61 00:16:26.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9146.78 35.73 7004.74 2834.84 43398.41 00:16:26.377 ======================================================== 00:16:26.377 Total : 39106.68 152.76 6548.80 1951.73 43398.41 00:16:26.377 00:16:26.377 16:12:27 -- target/perf_adq.sh@82 -- # nvmftestfini 00:16:26.377 16:12:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:26.377 16:12:27 -- nvmf/common.sh@117 -- # sync 00:16:26.377 16:12:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:26.377 16:12:27 -- nvmf/common.sh@120 -- # set +e 00:16:26.377 16:12:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.377 16:12:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:26.377 rmmod nvme_tcp 00:16:26.377 rmmod nvme_fabrics 00:16:26.377 rmmod nvme_keyring 00:16:26.377 16:12:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.377 16:12:27 -- nvmf/common.sh@124 -- # set -e 00:16:26.377 16:12:27 -- nvmf/common.sh@125 -- # return 0 00:16:26.377 16:12:27 -- nvmf/common.sh@478 -- # '[' -n 3416930 ']' 00:16:26.377 16:12:27 -- nvmf/common.sh@479 -- # killprocess 3416930 00:16:26.377 16:12:27 -- common/autotest_common.sh@936 -- # '[' -z 3416930 ']' 00:16:26.377 16:12:27 -- common/autotest_common.sh@940 -- # 
kill -0 3416930 00:16:26.377 16:12:27 -- common/autotest_common.sh@941 -- # uname 00:16:26.377 16:12:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:26.377 16:12:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3416930 00:16:26.377 16:12:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:26.377 16:12:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:26.377 16:12:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3416930' 00:16:26.377 killing process with pid 3416930 00:16:26.377 16:12:27 -- common/autotest_common.sh@955 -- # kill 3416930 00:16:26.377 16:12:27 -- common/autotest_common.sh@960 -- # wait 3416930 00:16:26.941 16:12:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:26.941 16:12:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:26.941 16:12:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:26.941 16:12:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.941 16:12:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.941 16:12:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.941 16:12:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.941 16:12:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.842 16:12:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:28.842 16:12:29 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:16:28.842 16:12:29 -- target/perf_adq.sh@52 -- # rmmod ice 00:16:29.408 16:12:30 -- target/perf_adq.sh@53 -- # modprobe ice 00:16:30.783 16:12:32 -- target/perf_adq.sh@54 -- # sleep 5 00:16:36.054 16:12:37 -- target/perf_adq.sh@87 -- # nvmftestinit 00:16:36.054 16:12:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:36.054 16:12:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.054 16:12:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:36.054 16:12:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:36.054 16:12:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:36.054 16:12:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.054 16:12:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.054 16:12:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.054 16:12:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:36.054 16:12:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:36.054 16:12:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.054 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.054 16:12:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:36.055 16:12:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:36.055 16:12:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:36.055 16:12:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:36.055 16:12:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:36.055 16:12:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:36.055 16:12:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:36.055 16:12:37 -- nvmf/common.sh@295 -- # net_devs=() 00:16:36.055 16:12:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:36.055 16:12:37 -- nvmf/common.sh@296 -- # e810=() 00:16:36.055 16:12:37 -- nvmf/common.sh@296 -- # local -ga e810 00:16:36.055 16:12:37 -- nvmf/common.sh@297 -- # x722=() 00:16:36.055 16:12:37 -- nvmf/common.sh@297 -- # local -ga x722 00:16:36.055 16:12:37 -- nvmf/common.sh@298 -- # mlx=() 00:16:36.055 16:12:37 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:16:36.055 16:12:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.055 16:12:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:36.055 16:12:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:36.055 16:12:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:36.055 16:12:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.055 16:12:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:36.055 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:36.055 16:12:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.055 16:12:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:36.055 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:36.055 16:12:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:36.055 16:12:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.055 16:12:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.055 16:12:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:36.055 16:12:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.055 16:12:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:36.055 Found net devices under 0000:09:00.0: cvl_0_0 00:16:36.055 16:12:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.055 16:12:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.055 16:12:37 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.055 16:12:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:36.055 16:12:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.055 16:12:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:36.055 Found net devices under 0000:09:00.1: cvl_0_1 00:16:36.055 16:12:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.055 16:12:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:36.055 16:12:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:36.055 16:12:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:36.055 16:12:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.055 16:12:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.055 16:12:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.055 16:12:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:36.055 16:12:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.055 16:12:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.055 16:12:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:36.055 16:12:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.055 16:12:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.055 16:12:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:36.055 16:12:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:36.055 16:12:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.055 16:12:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.055 16:12:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.055 16:12:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.055 16:12:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:36.055 16:12:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.055 16:12:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.055 16:12:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.055 16:12:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:36.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:16:36.055 00:16:36.055 --- 10.0.0.2 ping statistics --- 00:16:36.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.055 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:16:36.055 16:12:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:16:36.055 00:16:36.055 --- 10.0.0.1 ping statistics --- 00:16:36.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.055 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:36.055 16:12:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.055 16:12:37 -- nvmf/common.sh@411 -- # return 0 00:16:36.055 16:12:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:36.055 16:12:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.055 16:12:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:36.055 16:12:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.055 16:12:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:36.055 16:12:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:36.055 16:12:37 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:16:36.055 16:12:37 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:16:36.055 16:12:37 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:16:36.055 16:12:37 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:16:36.055 net.core.busy_poll = 1 00:16:36.055 16:12:37 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:16:36.055 net.core.busy_read = 1 00:16:36.055 16:12:37 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:16:36.055 16:12:37 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:16:36.055 16:12:37 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:16:36.055 16:12:37 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:16:36.055 16:12:37 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:16:36.313 16:12:37 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:36.313 16:12:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:36.313 16:12:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:36.313 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.313 16:12:37 -- nvmf/common.sh@470 -- # nvmfpid=3419571 00:16:36.313 16:12:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:36.313 16:12:37 -- nvmf/common.sh@471 -- # waitforlisten 3419571 00:16:36.313 16:12:37 -- common/autotest_common.sh@817 -- # '[' -z 3419571 ']' 00:16:36.313 16:12:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.313 16:12:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:36.313 16:12:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
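adq_configure_driver above is the ADQ half of the setup: hardware traffic-class offload is enabled on the target port, busy polling is switched on, and an mqprio/flower pair pins NVMe/TCP traffic to a dedicated queue set. The same commands in isolation (target namespace and interface names as above):

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1      # poll sockets briefly instead of sleeping on interrupts
    sysctl -w net.core.busy_read=1
    # TC0 = 2 queues at offset 0 (default traffic), TC1 = 2 queues at offset 2 (ADQ),
    # offloaded to the NIC in channel mode
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # steer flows to 10.0.0.2:4420 into TC1 entirely in hardware (skip_sw)
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked last maps transmit-queue selection onto the receive queues, keeping a connection's TX and RX on the same channel.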
00:16:36.313 16:12:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:36.313 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.313 [2024-04-24 16:12:37.395710] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:16:36.313 [2024-04-24 16:12:37.395831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.313 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.313 [2024-04-24 16:12:37.461068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.313 [2024-04-24 16:12:37.567348] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.313 [2024-04-24 16:12:37.567401] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.313 [2024-04-24 16:12:37.567431] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.313 [2024-04-24 16:12:37.567444] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.313 [2024-04-24 16:12:37.567455] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.313 [2024-04-24 16:12:37.567523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.313 [2024-04-24 16:12:37.567614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.313 [2024-04-24 16:12:37.567681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.313 [2024-04-24 16:12:37.567684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.571 16:12:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:36.571 16:12:37 -- common/autotest_common.sh@850 -- # return 0 00:16:36.571 16:12:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:36.571 16:12:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:36.571 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:12:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.571 16:12:37 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:16:36.571 16:12:37 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:16:36.571 16:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.571 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.571 16:12:37 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:16:36.571 16:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.571 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.571 16:12:37 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:16:36.572 16:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.572 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 [2024-04-24 16:12:37.749703] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.572 16:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.572 16:12:37 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
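Because this target was started with --wait-for-rpc, the socket-layer options above could be set before the framework initialized, which is the ordering ADQ placement needs. Relative to the baseline run earlier, placement-id is now 1 and the transport's sock priority is 1, matching the hardware traffic class just configured. A sketch of the same three RPCs issued directly with scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it):

    # must run before framework_start_init for the options to take effect
    scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

With placement-id 1 the posix sock layer groups accepted connections by receive queue, so each poll group services its own ADQ channel; the nvmf_get_stats/jq check below counts qpairs per poll group to confirm the spread actually happened.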
00:16:36.572 16:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.572 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 Malloc1 00:16:36.572 16:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.572 16:12:37 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:36.572 16:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.572 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.572 16:12:37 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.572 16:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.572 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.572 16:12:37 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.572 16:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.572 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 [2024-04-24 16:12:37.802775] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.572 16:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.572 16:12:37 -- target/perf_adq.sh@94 -- # perfpid=3419712 00:16:36.572 16:12:37 -- target/perf_adq.sh@95 -- # sleep 2 00:16:36.572 16:12:37 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:36.572 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.102 16:12:39 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:16:39.102 16:12:39 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:16:39.102 16:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.102 16:12:39 -- target/perf_adq.sh@97 -- # wc -l 00:16:39.102 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:16:39.102 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.102 16:12:39 -- target/perf_adq.sh@97 -- # count=3 00:16:39.102 16:12:39 -- target/perf_adq.sh@98 -- # [[ 3 -lt 2 ]] 00:16:39.102 16:12:39 -- target/perf_adq.sh@103 -- # wait 3419712 00:16:47.329 Initializing NVMe Controllers 00:16:47.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:47.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:47.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:47.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:47.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:47.329 Initialization complete. Launching workers. 
00:16:47.329 ======================================================== 00:16:47.329 Latency(us) 00:16:47.329 Device Information : IOPS MiB/s Average min max 00:16:47.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4839.10 18.90 13237.01 2135.21 61663.50 00:16:47.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4267.40 16.67 15007.08 1602.13 62887.90 00:16:47.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4824.10 18.84 13318.08 2039.22 61706.24 00:16:47.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4075.50 15.92 15714.23 3539.28 63063.94 00:16:47.329 ======================================================== 00:16:47.329 Total : 18006.10 70.34 14238.93 1602.13 63063.94 00:16:47.329 00:16:47.329 16:12:47 -- target/perf_adq.sh@104 -- # nvmftestfini 00:16:47.329 16:12:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:47.329 16:12:47 -- nvmf/common.sh@117 -- # sync 00:16:47.329 16:12:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.329 16:12:47 -- nvmf/common.sh@120 -- # set +e 00:16:47.329 16:12:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.329 16:12:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.329 rmmod nvme_tcp 00:16:47.329 rmmod nvme_fabrics 00:16:47.329 rmmod nvme_keyring 00:16:47.329 16:12:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.329 16:12:48 -- nvmf/common.sh@124 -- # set -e 00:16:47.329 16:12:48 -- nvmf/common.sh@125 -- # return 0 00:16:47.329 16:12:48 -- nvmf/common.sh@478 -- # '[' -n 3419571 ']' 00:16:47.329 16:12:48 -- nvmf/common.sh@479 -- # killprocess 3419571 00:16:47.329 16:12:48 -- common/autotest_common.sh@936 -- # '[' -z 3419571 ']' 00:16:47.329 16:12:48 -- common/autotest_common.sh@940 -- # kill -0 3419571 00:16:47.329 16:12:48 -- common/autotest_common.sh@941 -- # uname 00:16:47.329 16:12:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.329 16:12:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3419571 00:16:47.329 16:12:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:47.329 16:12:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:47.329 16:12:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3419571' 00:16:47.329 killing process with pid 3419571 00:16:47.329 16:12:48 -- common/autotest_common.sh@955 -- # kill 3419571 00:16:47.329 16:12:48 -- common/autotest_common.sh@960 -- # wait 3419571 00:16:47.329 16:12:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:47.329 16:12:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:47.329 16:12:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:47.329 16:12:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.329 16:12:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.329 16:12:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.329 16:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.329 16:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.231 16:12:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.231 16:12:50 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:16:49.231 00:16:49.231 real 0m42.796s 00:16:49.231 user 2m30.547s 00:16:49.231 sys 0m12.993s 00:16:49.231 16:12:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:49.231 16:12:50 -- common/autotest_common.sh@10 -- # set +x 00:16:49.231 
************************************ 00:16:49.231 END TEST nvmf_perf_adq 00:16:49.231 ************************************ 00:16:49.231 16:12:50 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:49.231 16:12:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.231 16:12:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.231 16:12:50 -- common/autotest_common.sh@10 -- # set +x 00:16:49.490 ************************************ 00:16:49.490 START TEST nvmf_shutdown 00:16:49.490 ************************************ 00:16:49.490 16:12:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:49.490 * Looking for test storage... 00:16:49.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.490 16:12:50 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.490 16:12:50 -- nvmf/common.sh@7 -- # uname -s 00:16:49.490 16:12:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.490 16:12:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.490 16:12:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.490 16:12:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.490 16:12:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.490 16:12:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.490 16:12:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.490 16:12:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.490 16:12:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.490 16:12:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.490 16:12:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:49.490 16:12:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:49.490 16:12:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.490 16:12:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.490 16:12:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.490 16:12:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.490 16:12:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.490 16:12:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.490 16:12:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.490 16:12:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.490 16:12:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.490 16:12:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.490 16:12:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.490 16:12:50 -- paths/export.sh@5 -- # export PATH 00:16:49.490 16:12:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.490 16:12:50 -- nvmf/common.sh@47 -- # : 0 00:16:49.490 16:12:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.490 16:12:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.490 16:12:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.490 16:12:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.490 16:12:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.490 16:12:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.490 16:12:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.490 16:12:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.490 16:12:50 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.490 16:12:50 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.490 16:12:50 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:49.490 16:12:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:49.490 16:12:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.490 16:12:50 -- common/autotest_common.sh@10 -- # set +x 00:16:49.490 ************************************ 00:16:49.490 START TEST nvmf_shutdown_tc1 00:16:49.490 ************************************ 00:16:49.490 16:12:50 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:16:49.490 16:12:50 -- target/shutdown.sh@74 -- # starttarget 00:16:49.490 16:12:50 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:49.490 16:12:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:49.490 16:12:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.490 16:12:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:49.490 16:12:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:49.491 16:12:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:49.491 
16:12:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.491 16:12:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.491 16:12:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.491 16:12:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:49.491 16:12:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:49.491 16:12:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.491 16:12:50 -- common/autotest_common.sh@10 -- # set +x 00:16:51.388 16:12:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:51.388 16:12:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.388 16:12:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.388 16:12:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.388 16:12:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.388 16:12:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.388 16:12:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.388 16:12:52 -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.388 16:12:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.388 16:12:52 -- nvmf/common.sh@296 -- # e810=() 00:16:51.388 16:12:52 -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.388 16:12:52 -- nvmf/common.sh@297 -- # x722=() 00:16:51.388 16:12:52 -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.388 16:12:52 -- nvmf/common.sh@298 -- # mlx=() 00:16:51.388 16:12:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.388 16:12:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.388 16:12:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.388 16:12:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.388 16:12:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.388 16:12:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.388 16:12:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.388 16:12:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.388 16:12:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.388 16:12:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:51.388 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:51.388 16:12:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.388 16:12:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.388 16:12:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:16:51.389 16:12:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:51.389 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:51.389 16:12:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.389 16:12:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.389 16:12:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.389 16:12:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:51.389 16:12:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.389 16:12:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:51.389 Found net devices under 0000:09:00.0: cvl_0_0 00:16:51.389 16:12:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.389 16:12:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.389 16:12:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.389 16:12:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:51.389 16:12:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.389 16:12:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:51.389 Found net devices under 0000:09:00.1: cvl_0_1 00:16:51.389 16:12:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.389 16:12:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:51.389 16:12:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:51.389 16:12:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:51.389 16:12:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.389 16:12:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.389 16:12:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.389 16:12:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.389 16:12:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.389 16:12:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.389 16:12:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.389 16:12:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.389 16:12:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.389 16:12:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.389 16:12:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.389 16:12:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.389 16:12:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.389 16:12:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.389 16:12:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.389 16:12:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.389 16:12:52 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.389 16:12:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.389 16:12:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.389 16:12:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:16:51.389 00:16:51.389 --- 10.0.0.2 ping statistics --- 00:16:51.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.389 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:16:51.389 16:12:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:16:51.389 00:16:51.389 --- 10.0.0.1 ping statistics --- 00:16:51.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.389 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:51.389 16:12:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.389 16:12:52 -- nvmf/common.sh@411 -- # return 0 00:16:51.389 16:12:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:51.389 16:12:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.389 16:12:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:51.389 16:12:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.389 16:12:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:51.389 16:12:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:51.389 16:12:52 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:51.389 16:12:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:51.389 16:12:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:51.389 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:16:51.389 16:12:52 -- nvmf/common.sh@470 -- # nvmfpid=3422888 00:16:51.389 16:12:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:51.389 16:12:52 -- nvmf/common.sh@471 -- # waitforlisten 3422888 00:16:51.389 16:12:52 -- common/autotest_common.sh@817 -- # '[' -z 3422888 ']' 00:16:51.389 16:12:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.389 16:12:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:51.389 16:12:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.389 16:12:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:51.389 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:16:51.389 [2024-04-24 16:12:52.673141] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
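For the shutdown tests, nvmfappstart boils down to launching the target inside the namespace and waiting for its RPC socket; -m 0x1E selects cores 1-4, which is consistent with the reactor notices that follow. Roughly, with the paths used in this run (the polling loop is an approximation of what waitforlisten does):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app answers
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

-e 0xFFFF enables all tracepoint groups, which is why the app prints the spdk_trace hints seen below.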
00:16:51.389 [2024-04-24 16:12:52.673209] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.647 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.647 [2024-04-24 16:12:52.740370] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.647 [2024-04-24 16:12:52.843447] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.647 [2024-04-24 16:12:52.843493] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.647 [2024-04-24 16:12:52.843521] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.647 [2024-04-24 16:12:52.843534] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.647 [2024-04-24 16:12:52.843544] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.647 [2024-04-24 16:12:52.843669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.647 [2024-04-24 16:12:52.843765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.647 [2024-04-24 16:12:52.843819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.647 [2024-04-24 16:12:52.843816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:51.905 16:12:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.905 16:12:52 -- common/autotest_common.sh@850 -- # return 0 00:16:51.905 16:12:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:51.905 16:12:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:51.905 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:16:51.905 16:12:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.905 16:12:52 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.905 16:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.905 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:16:51.905 [2024-04-24 16:12:52.997520] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.905 16:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.905 16:12:53 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:51.905 16:12:53 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:51.905 16:12:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:51.905 16:12:53 -- common/autotest_common.sh@10 -- # set +x 00:16:51.905 16:12:53 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 -- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 -- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 -- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 -- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 
-- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 -- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 -- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 -- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 -- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.905 16:12:53 -- target/shutdown.sh@28 -- # cat 00:16:51.905 16:12:53 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:51.905 16:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.905 16:12:53 -- common/autotest_common.sh@10 -- # set +x 00:16:51.905 Malloc1 00:16:51.905 [2024-04-24 16:12:53.087305] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.905 Malloc2 00:16:51.905 Malloc3 00:16:52.163 Malloc4 00:16:52.163 Malloc5 00:16:52.163 Malloc6 00:16:52.163 Malloc7 00:16:52.163 Malloc8 00:16:52.421 Malloc9 00:16:52.421 Malloc10 00:16:52.421 16:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.421 16:12:53 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:52.421 16:12:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:52.421 16:12:53 -- common/autotest_common.sh@10 -- # set +x 00:16:52.421 16:12:53 -- target/shutdown.sh@78 -- # perfpid=3423022 00:16:52.421 16:12:53 -- target/shutdown.sh@79 -- # waitforlisten 3423022 /var/tmp/bdevperf.sock 00:16:52.421 16:12:53 -- common/autotest_common.sh@817 -- # '[' -z 3423022 ']' 00:16:52.421 16:12:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.421 16:12:53 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:52.421 16:12:53 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:52.421 16:12:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:52.421 16:12:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
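gen_nvmf_target_json, invoked above and traced below, emits one bdev_nvme_attach_controller entry per subsystem for the bdev_svc app to consume over /dev/fd/63. With the shell variables filled in for the first subsystem (transport, address, and port as established earlier in this run; digests default to false), a single entry reduces to:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

One such entry per cnode1..cnode10 gives bdev_svc ten NVMe-oF controllers to attach before shutdown_tc1 exercises them.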
00:16:52.421 16:12:53 -- nvmf/common.sh@521 -- # config=() 00:16:52.421 16:12:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:52.421 16:12:53 -- nvmf/common.sh@521 -- # local subsystem config 00:16:52.421 16:12:53 -- common/autotest_common.sh@10 -- # set +x 00:16:52.421 16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.421 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.421 { 00:16:52.421 "params": { 00:16:52.421 "name": "Nvme$subsystem", 00:16:52.421 "trtype": "$TEST_TRANSPORT", 00:16:52.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.421 "adrfam": "ipv4", 00:16:52.421 "trsvcid": "$NVMF_PORT", 00:16:52.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.421 "hdgst": ${hdgst:-false}, 00:16:52.421 "ddgst": ${ddgst:-false} 00:16:52.421 }, 00:16:52.421 "method": "bdev_nvme_attach_controller" 00:16:52.421 } 00:16:52.421 EOF 00:16:52.421 )") 00:16:52.421 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.421 16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.421 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.421 { 00:16:52.421 "params": { 00:16:52.421 "name": "Nvme$subsystem", 00:16:52.421 "trtype": "$TEST_TRANSPORT", 00:16:52.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.421 "adrfam": "ipv4", 00:16:52.421 "trsvcid": "$NVMF_PORT", 00:16:52.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.421 "hdgst": ${hdgst:-false}, 00:16:52.421 "ddgst": ${ddgst:-false} 00:16:52.421 }, 00:16:52.421 "method": "bdev_nvme_attach_controller" 00:16:52.421 } 00:16:52.421 EOF 00:16:52.421 )") 00:16:52.421 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.421 16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.422 { 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme$subsystem", 00:16:52.422 "trtype": "$TEST_TRANSPORT", 00:16:52.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "$NVMF_PORT", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.422 "hdgst": ${hdgst:-false}, 00:16:52.422 "ddgst": ${ddgst:-false} 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 } 00:16:52.422 EOF 00:16:52.422 )") 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.422 16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.422 { 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme$subsystem", 00:16:52.422 "trtype": "$TEST_TRANSPORT", 00:16:52.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "$NVMF_PORT", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.422 "hdgst": ${hdgst:-false}, 00:16:52.422 "ddgst": ${ddgst:-false} 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 } 00:16:52.422 EOF 00:16:52.422 )") 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.422 16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.422 { 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme$subsystem", 00:16:52.422 "trtype": 
"$TEST_TRANSPORT", 00:16:52.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "$NVMF_PORT", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.422 "hdgst": ${hdgst:-false}, 00:16:52.422 "ddgst": ${ddgst:-false} 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 } 00:16:52.422 EOF 00:16:52.422 )") 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.422 16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.422 { 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme$subsystem", 00:16:52.422 "trtype": "$TEST_TRANSPORT", 00:16:52.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "$NVMF_PORT", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.422 "hdgst": ${hdgst:-false}, 00:16:52.422 "ddgst": ${ddgst:-false} 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 } 00:16:52.422 EOF 00:16:52.422 )") 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.422 16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.422 { 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme$subsystem", 00:16:52.422 "trtype": "$TEST_TRANSPORT", 00:16:52.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "$NVMF_PORT", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.422 "hdgst": ${hdgst:-false}, 00:16:52.422 "ddgst": ${ddgst:-false} 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 } 00:16:52.422 EOF 00:16:52.422 )") 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.422 16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.422 { 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme$subsystem", 00:16:52.422 "trtype": "$TEST_TRANSPORT", 00:16:52.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "$NVMF_PORT", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.422 "hdgst": ${hdgst:-false}, 00:16:52.422 "ddgst": ${ddgst:-false} 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 } 00:16:52.422 EOF 00:16:52.422 )") 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.422 16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.422 { 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme$subsystem", 00:16:52.422 "trtype": "$TEST_TRANSPORT", 00:16:52.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "$NVMF_PORT", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.422 "hdgst": ${hdgst:-false}, 00:16:52.422 "ddgst": ${ddgst:-false} 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 } 00:16:52.422 EOF 00:16:52.422 )") 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.422 
16:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.422 { 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme$subsystem", 00:16:52.422 "trtype": "$TEST_TRANSPORT", 00:16:52.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "$NVMF_PORT", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.422 "hdgst": ${hdgst:-false}, 00:16:52.422 "ddgst": ${ddgst:-false} 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 } 00:16:52.422 EOF 00:16:52.422 )") 00:16:52.422 16:12:53 -- nvmf/common.sh@543 -- # cat 00:16:52.422 16:12:53 -- nvmf/common.sh@545 -- # jq . 00:16:52.422 16:12:53 -- nvmf/common.sh@546 -- # IFS=, 00:16:52.422 16:12:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme1", 00:16:52.422 "trtype": "tcp", 00:16:52.422 "traddr": "10.0.0.2", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "4420", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.422 "hdgst": false, 00:16:52.422 "ddgst": false 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 },{ 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme2", 00:16:52.422 "trtype": "tcp", 00:16:52.422 "traddr": "10.0.0.2", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "4420", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:52.422 "hdgst": false, 00:16:52.422 "ddgst": false 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 },{ 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme3", 00:16:52.422 "trtype": "tcp", 00:16:52.422 "traddr": "10.0.0.2", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "4420", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:52.422 "hdgst": false, 00:16:52.422 "ddgst": false 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 },{ 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme4", 00:16:52.422 "trtype": "tcp", 00:16:52.422 "traddr": "10.0.0.2", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "4420", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:52.422 "hdgst": false, 00:16:52.422 "ddgst": false 00:16:52.422 }, 00:16:52.422 "method": "bdev_nvme_attach_controller" 00:16:52.422 },{ 00:16:52.422 "params": { 00:16:52.422 "name": "Nvme5", 00:16:52.422 "trtype": "tcp", 00:16:52.422 "traddr": "10.0.0.2", 00:16:52.422 "adrfam": "ipv4", 00:16:52.422 "trsvcid": "4420", 00:16:52.422 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:52.422 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:52.423 "hdgst": false, 00:16:52.423 "ddgst": false 00:16:52.423 }, 00:16:52.423 "method": "bdev_nvme_attach_controller" 00:16:52.423 },{ 00:16:52.423 "params": { 00:16:52.423 "name": "Nvme6", 00:16:52.423 "trtype": "tcp", 00:16:52.423 "traddr": "10.0.0.2", 00:16:52.423 "adrfam": "ipv4", 00:16:52.423 "trsvcid": "4420", 00:16:52.423 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:52.423 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:52.423 "hdgst": false, 00:16:52.423 "ddgst": false 00:16:52.423 }, 00:16:52.423 "method": "bdev_nvme_attach_controller" 00:16:52.423 },{ 00:16:52.423 "params": { 00:16:52.423 
"name": "Nvme7", 00:16:52.423 "trtype": "tcp", 00:16:52.423 "traddr": "10.0.0.2", 00:16:52.423 "adrfam": "ipv4", 00:16:52.423 "trsvcid": "4420", 00:16:52.423 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:52.423 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:52.423 "hdgst": false, 00:16:52.423 "ddgst": false 00:16:52.423 }, 00:16:52.423 "method": "bdev_nvme_attach_controller" 00:16:52.423 },{ 00:16:52.423 "params": { 00:16:52.423 "name": "Nvme8", 00:16:52.423 "trtype": "tcp", 00:16:52.423 "traddr": "10.0.0.2", 00:16:52.423 "adrfam": "ipv4", 00:16:52.423 "trsvcid": "4420", 00:16:52.423 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:52.423 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:52.423 "hdgst": false, 00:16:52.423 "ddgst": false 00:16:52.423 }, 00:16:52.423 "method": "bdev_nvme_attach_controller" 00:16:52.423 },{ 00:16:52.423 "params": { 00:16:52.423 "name": "Nvme9", 00:16:52.423 "trtype": "tcp", 00:16:52.423 "traddr": "10.0.0.2", 00:16:52.423 "adrfam": "ipv4", 00:16:52.423 "trsvcid": "4420", 00:16:52.423 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:52.423 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:52.423 "hdgst": false, 00:16:52.423 "ddgst": false 00:16:52.423 }, 00:16:52.423 "method": "bdev_nvme_attach_controller" 00:16:52.423 },{ 00:16:52.423 "params": { 00:16:52.423 "name": "Nvme10", 00:16:52.423 "trtype": "tcp", 00:16:52.423 "traddr": "10.0.0.2", 00:16:52.423 "adrfam": "ipv4", 00:16:52.423 "trsvcid": "4420", 00:16:52.423 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:52.423 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:52.423 "hdgst": false, 00:16:52.423 "ddgst": false 00:16:52.423 }, 00:16:52.423 "method": "bdev_nvme_attach_controller" 00:16:52.423 }' 00:16:52.423 [2024-04-24 16:12:53.595775] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:16:52.423 [2024-04-24 16:12:53.595854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:52.423 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.423 [2024-04-24 16:12:53.658653] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.681 [2024-04-24 16:12:53.762717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.577 16:12:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:54.577 16:12:55 -- common/autotest_common.sh@850 -- # return 0 00:16:54.577 16:12:55 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:54.577 16:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.577 16:12:55 -- common/autotest_common.sh@10 -- # set +x 00:16:54.577 16:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.577 16:12:55 -- target/shutdown.sh@83 -- # kill -9 3423022 00:16:54.577 16:12:55 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:54.577 16:12:55 -- target/shutdown.sh@87 -- # sleep 1 00:16:55.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3423022 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:55.525 16:12:56 -- target/shutdown.sh@88 -- # kill -0 3422888 00:16:55.525 16:12:56 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:55.525 16:12:56 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:55.525 16:12:56 -- nvmf/common.sh@521 -- # config=() 00:16:55.525 16:12:56 -- nvmf/common.sh@521 -- # local subsystem config 00:16:55.525 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.525 16:12:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:55.525 { 00:16:55.525 "params": { 00:16:55.525 "name": "Nvme$subsystem", 00:16:55.525 "trtype": "$TEST_TRANSPORT", 00:16:55.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.525 "adrfam": "ipv4", 00:16:55.525 "trsvcid": "$NVMF_PORT", 00:16:55.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.525 "hdgst": ${hdgst:-false}, 00:16:55.525 "ddgst": ${ddgst:-false} 00:16:55.525 }, 00:16:55.525 "method": "bdev_nvme_attach_controller" 00:16:55.525 } 00:16:55.525 EOF 00:16:55.525 )") 00:16:55.525 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.525 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.525 16:12:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:55.525 { 00:16:55.525 "params": { 00:16:55.525 "name": "Nvme$subsystem", 00:16:55.525 "trtype": "$TEST_TRANSPORT", 00:16:55.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.525 "adrfam": "ipv4", 00:16:55.525 "trsvcid": "$NVMF_PORT", 00:16:55.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.525 "hdgst": ${hdgst:-false}, 00:16:55.525 "ddgst": ${ddgst:-false} 00:16:55.525 }, 00:16:55.525 "method": "bdev_nvme_attach_controller" 00:16:55.525 } 00:16:55.525 EOF 00:16:55.525 )") 00:16:55.525 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.525 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.525 16:12:56 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:16:55.525 { 00:16:55.525 "params": { 00:16:55.525 "name": "Nvme$subsystem", 00:16:55.525 "trtype": "$TEST_TRANSPORT", 00:16:55.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.525 "adrfam": "ipv4", 00:16:55.525 "trsvcid": "$NVMF_PORT", 00:16:55.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.525 "hdgst": ${hdgst:-false}, 00:16:55.526 "ddgst": ${ddgst:-false} 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 } 00:16:55.526 EOF 00:16:55.526 )") 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.526 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:55.526 { 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme$subsystem", 00:16:55.526 "trtype": "$TEST_TRANSPORT", 00:16:55.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "$NVMF_PORT", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.526 "hdgst": ${hdgst:-false}, 00:16:55.526 "ddgst": ${ddgst:-false} 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 } 00:16:55.526 EOF 00:16:55.526 )") 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.526 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:55.526 { 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme$subsystem", 00:16:55.526 "trtype": "$TEST_TRANSPORT", 00:16:55.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "$NVMF_PORT", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.526 "hdgst": ${hdgst:-false}, 00:16:55.526 "ddgst": ${ddgst:-false} 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 } 00:16:55.526 EOF 00:16:55.526 )") 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.526 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:55.526 { 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme$subsystem", 00:16:55.526 "trtype": "$TEST_TRANSPORT", 00:16:55.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "$NVMF_PORT", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.526 "hdgst": ${hdgst:-false}, 00:16:55.526 "ddgst": ${ddgst:-false} 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 } 00:16:55.526 EOF 00:16:55.526 )") 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.526 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:55.526 { 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme$subsystem", 00:16:55.526 "trtype": "$TEST_TRANSPORT", 00:16:55.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "$NVMF_PORT", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.526 "hdgst": ${hdgst:-false}, 00:16:55.526 "ddgst": ${ddgst:-false} 00:16:55.526 }, 00:16:55.526 "method": 
"bdev_nvme_attach_controller" 00:16:55.526 } 00:16:55.526 EOF 00:16:55.526 )") 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.526 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:55.526 { 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme$subsystem", 00:16:55.526 "trtype": "$TEST_TRANSPORT", 00:16:55.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "$NVMF_PORT", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.526 "hdgst": ${hdgst:-false}, 00:16:55.526 "ddgst": ${ddgst:-false} 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 } 00:16:55.526 EOF 00:16:55.526 )") 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.526 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:55.526 { 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme$subsystem", 00:16:55.526 "trtype": "$TEST_TRANSPORT", 00:16:55.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "$NVMF_PORT", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.526 "hdgst": ${hdgst:-false}, 00:16:55.526 "ddgst": ${ddgst:-false} 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 } 00:16:55.526 EOF 00:16:55.526 )") 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.526 16:12:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:55.526 { 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme$subsystem", 00:16:55.526 "trtype": "$TEST_TRANSPORT", 00:16:55.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "$NVMF_PORT", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.526 "hdgst": ${hdgst:-false}, 00:16:55.526 "ddgst": ${ddgst:-false} 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 } 00:16:55.526 EOF 00:16:55.526 )") 00:16:55.526 16:12:56 -- nvmf/common.sh@543 -- # cat 00:16:55.526 16:12:56 -- nvmf/common.sh@545 -- # jq . 
00:16:55.526 16:12:56 -- nvmf/common.sh@546 -- # IFS=, 00:16:55.526 16:12:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme1", 00:16:55.526 "trtype": "tcp", 00:16:55.526 "traddr": "10.0.0.2", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "4420", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:55.526 "hdgst": false, 00:16:55.526 "ddgst": false 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 },{ 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme2", 00:16:55.526 "trtype": "tcp", 00:16:55.526 "traddr": "10.0.0.2", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "4420", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:55.526 "hdgst": false, 00:16:55.526 "ddgst": false 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 },{ 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme3", 00:16:55.526 "trtype": "tcp", 00:16:55.526 "traddr": "10.0.0.2", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "4420", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:55.526 "hdgst": false, 00:16:55.526 "ddgst": false 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 },{ 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme4", 00:16:55.526 "trtype": "tcp", 00:16:55.526 "traddr": "10.0.0.2", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "4420", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:55.526 "hdgst": false, 00:16:55.526 "ddgst": false 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 },{ 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme5", 00:16:55.526 "trtype": "tcp", 00:16:55.526 "traddr": "10.0.0.2", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "4420", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:55.526 "hdgst": false, 00:16:55.526 "ddgst": false 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 },{ 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme6", 00:16:55.526 "trtype": "tcp", 00:16:55.526 "traddr": "10.0.0.2", 00:16:55.526 "adrfam": "ipv4", 00:16:55.526 "trsvcid": "4420", 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:55.526 "hdgst": false, 00:16:55.526 "ddgst": false 00:16:55.526 }, 00:16:55.526 "method": "bdev_nvme_attach_controller" 00:16:55.526 },{ 00:16:55.526 "params": { 00:16:55.526 "name": "Nvme7", 00:16:55.526 "trtype": "tcp", 00:16:55.526 "traddr": "10.0.0.2", 00:16:55.527 "adrfam": "ipv4", 00:16:55.527 "trsvcid": "4420", 00:16:55.527 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:55.527 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:55.527 "hdgst": false, 00:16:55.527 "ddgst": false 00:16:55.527 }, 00:16:55.527 "method": "bdev_nvme_attach_controller" 00:16:55.527 },{ 00:16:55.527 "params": { 00:16:55.527 "name": "Nvme8", 00:16:55.527 "trtype": "tcp", 00:16:55.527 "traddr": "10.0.0.2", 00:16:55.527 "adrfam": "ipv4", 00:16:55.527 "trsvcid": "4420", 00:16:55.527 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:55.527 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:55.527 "hdgst": false, 00:16:55.527 "ddgst": false 00:16:55.527 }, 00:16:55.527 "method": 
"bdev_nvme_attach_controller" 00:16:55.527 },{ 00:16:55.527 "params": { 00:16:55.527 "name": "Nvme9", 00:16:55.527 "trtype": "tcp", 00:16:55.527 "traddr": "10.0.0.2", 00:16:55.527 "adrfam": "ipv4", 00:16:55.527 "trsvcid": "4420", 00:16:55.527 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:55.527 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:55.527 "hdgst": false, 00:16:55.527 "ddgst": false 00:16:55.527 }, 00:16:55.527 "method": "bdev_nvme_attach_controller" 00:16:55.527 },{ 00:16:55.527 "params": { 00:16:55.527 "name": "Nvme10", 00:16:55.527 "trtype": "tcp", 00:16:55.527 "traddr": "10.0.0.2", 00:16:55.527 "adrfam": "ipv4", 00:16:55.527 "trsvcid": "4420", 00:16:55.527 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:55.527 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:55.527 "hdgst": false, 00:16:55.527 "ddgst": false 00:16:55.527 }, 00:16:55.527 "method": "bdev_nvme_attach_controller" 00:16:55.527 }' 00:16:55.527 [2024-04-24 16:12:56.608608] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:16:55.527 [2024-04-24 16:12:56.608707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423369 ] 00:16:55.527 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.527 [2024-04-24 16:12:56.673694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.527 [2024-04-24 16:12:56.781979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.422 Running I/O for 1 seconds... 00:16:58.354 00:16:58.354 Latency(us) 00:16:58.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.354 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme1n1 : 1.15 221.74 13.86 0.00 0.00 285888.47 24369.68 253211.69 00:16:58.354 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme2n1 : 1.16 220.01 13.75 0.00 0.00 283476.01 23787.14 265639.25 00:16:58.354 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme3n1 : 1.09 243.93 15.25 0.00 0.00 240249.39 16505.36 259425.47 00:16:58.354 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme4n1 : 1.10 233.35 14.58 0.00 0.00 257856.09 20680.25 251658.24 00:16:58.354 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme5n1 : 1.17 218.35 13.65 0.00 0.00 270483.53 23010.42 250104.79 00:16:58.354 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme6n1 : 1.19 215.78 13.49 0.00 0.00 271064.56 22136.60 282727.16 00:16:58.354 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme7n1 : 1.19 322.29 20.14 0.00 0.00 177874.05 10097.40 246997.90 00:16:58.354 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme8n1 : 1.20 266.69 16.67 0.00 0.00 212466.35 22136.60 
223696.21 00:16:58.354 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme9n1 : 1.18 216.79 13.55 0.00 0.00 256413.20 23204.60 262532.36 00:16:58.354 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.354 Verification LBA range: start 0x0 length 0x400 00:16:58.354 Nvme10n1 : 1.18 216.20 13.51 0.00 0.00 252922.88 22039.51 268746.15 00:16:58.354 =================================================================================================================== 00:16:58.354 Total : 2375.14 148.45 0.00 0.00 246556.01 10097.40 282727.16 00:16:58.612 16:12:59 -- target/shutdown.sh@94 -- # stoptarget 00:16:58.612 16:12:59 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:58.612 16:12:59 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:58.612 16:12:59 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:58.612 16:12:59 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:58.612 16:12:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:58.612 16:12:59 -- nvmf/common.sh@117 -- # sync 00:16:58.612 16:12:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.612 16:12:59 -- nvmf/common.sh@120 -- # set +e 00:16:58.612 16:12:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.612 16:12:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.612 rmmod nvme_tcp 00:16:58.612 rmmod nvme_fabrics 00:16:58.612 rmmod nvme_keyring 00:16:58.612 16:12:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.612 16:12:59 -- nvmf/common.sh@124 -- # set -e 00:16:58.612 16:12:59 -- nvmf/common.sh@125 -- # return 0 00:16:58.612 16:12:59 -- nvmf/common.sh@478 -- # '[' -n 3422888 ']' 00:16:58.612 16:12:59 -- nvmf/common.sh@479 -- # killprocess 3422888 00:16:58.612 16:12:59 -- common/autotest_common.sh@936 -- # '[' -z 3422888 ']' 00:16:58.612 16:12:59 -- common/autotest_common.sh@940 -- # kill -0 3422888 00:16:58.612 16:12:59 -- common/autotest_common.sh@941 -- # uname 00:16:58.612 16:12:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:58.612 16:12:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3422888 00:16:58.612 16:12:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:58.612 16:12:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:58.612 16:12:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3422888' 00:16:58.612 killing process with pid 3422888 00:16:58.612 16:12:59 -- common/autotest_common.sh@955 -- # kill 3422888 00:16:58.612 16:12:59 -- common/autotest_common.sh@960 -- # wait 3422888 00:16:59.177 16:13:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:59.177 16:13:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:59.178 16:13:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:59.178 16:13:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.178 16:13:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:59.178 16:13:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.178 16:13:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.178 16:13:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.709 16:13:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:01.709 00:17:01.709 real 0m11.737s 
00:17:01.709 user 0m34.311s
00:17:01.709 sys 0m3.212s
00:17:01.709 16:13:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:17:01.709 16:13:02 -- common/autotest_common.sh@10 -- # set +x
00:17:01.709 ************************************
00:17:01.709 END TEST nvmf_shutdown_tc1
00:17:01.709 ************************************
00:17:01.709 16:13:02 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:17:01.709 16:13:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:17:01.709 16:13:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:01.709 16:13:02 -- common/autotest_common.sh@10 -- # set +x
00:17:01.709 ************************************
00:17:01.709 START TEST nvmf_shutdown_tc2
00:17:01.709 ************************************
00:17:01.709 16:13:02 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2
00:17:01.709 16:13:02 -- target/shutdown.sh@99 -- # starttarget
00:17:01.709 16:13:02 -- target/shutdown.sh@15 -- # nvmftestinit
00:17:01.709 16:13:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:17:01.709 16:13:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:01.709 16:13:02 -- nvmf/common.sh@437 -- # prepare_net_devs
00:17:01.709 16:13:02 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:17:01.709 16:13:02 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:17:01.709 16:13:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:01.709 16:13:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:01.709 16:13:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:01.709 16:13:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:17:01.709 16:13:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:17:01.709 16:13:02 -- nvmf/common.sh@285 -- # xtrace_disable
00:17:01.709 16:13:02 -- common/autotest_common.sh@10 -- # set +x
00:17:01.709 16:13:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:17:01.709 16:13:02 -- nvmf/common.sh@291 -- # pci_devs=()
00:17:01.709 16:13:02 -- nvmf/common.sh@291 -- # local -a pci_devs
00:17:01.709 16:13:02 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:17:01.709 16:13:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:17:01.709 16:13:02 -- nvmf/common.sh@293 -- # pci_drivers=()
00:17:01.709 16:13:02 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:17:01.709 16:13:02 -- nvmf/common.sh@295 -- # net_devs=()
00:17:01.709 16:13:02 -- nvmf/common.sh@295 -- # local -ga net_devs
00:17:01.709 16:13:02 -- nvmf/common.sh@296 -- # e810=()
00:17:01.709 16:13:02 -- nvmf/common.sh@296 -- # local -ga e810
00:17:01.709 16:13:02 -- nvmf/common.sh@297 -- # x722=()
00:17:01.709 16:13:02 -- nvmf/common.sh@297 -- # local -ga x722
00:17:01.709 16:13:02 -- nvmf/common.sh@298 -- # mlx=()
00:17:01.709 16:13:02 -- nvmf/common.sh@298 -- # local -ga mlx
00:17:01.709 16:13:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:01.709 16:13:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:01.709 16:13:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:01.709 16:13:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:01.709 16:13:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:01.709 16:13:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:01.709 16:13:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:01.710 16:13:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:01.710 16:13:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:01.710 16:13:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:01.710 16:13:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:01.710 16:13:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:17:01.710 16:13:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:17:01.710 16:13:02 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:17:01.710 16:13:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:01.710 16:13:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:17:01.710 Found 0000:09:00.0 (0x8086 - 0x159b)
00:17:01.710 16:13:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:01.710 16:13:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:17:01.710 Found 0000:09:00.1 (0x8086 - 0x159b)
00:17:01.710 16:13:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:17:01.710 16:13:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:01.710 16:13:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:01.710 16:13:02 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:17:01.710 16:13:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:01.710 16:13:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:17:01.710 Found net devices under 0000:09:00.0: cvl_0_0
00:17:01.710 16:13:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:17:01.710 16:13:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:01.710 16:13:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:01.710 16:13:02 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:17:01.710 16:13:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:01.710 16:13:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:17:01.710 Found net devices under 0000:09:00.1: cvl_0_1
00:17:01.710 16:13:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:17:01.710 16:13:02 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:17:01.710 16:13:02 -- nvmf/common.sh@403 -- # is_hw=yes
00:17:01.710 16:13:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init
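Condensing the nvmf/common.sh@340-390 trace above, the discovery pass reduces to a loop over the matched PCI functions, resolving each one's netdev name from sysfs (variable names as in the trace):

# Sketch of the device-discovery loop traced above (nvmf/common.sh@382-390);
# pci_devs has already been filtered to the supported e810 IDs at this point.
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:09:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done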
00:17:01.710 16:13:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:01.710 16:13:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:01.710 16:13:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:01.710 16:13:02 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:17:01.710 16:13:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:01.710 16:13:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:01.710 16:13:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:17:01.710 16:13:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:01.710 16:13:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:01.710 16:13:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:17:01.710 16:13:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:17:01.710 16:13:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:17:01.710 16:13:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:01.710 16:13:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:01.710 16:13:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:01.710 16:13:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:01.710 16:13:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:01.710 16:13:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:01.710 16:13:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:01.710 16:13:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:01.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:01.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms
00:17:01.710
00:17:01.710 --- 10.0.0.2 ping statistics ---
00:17:01.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:01.710 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:17:01.710 16:13:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:01.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:01.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:17:01.710
00:17:01.710 --- 10.0.0.1 ping statistics ---
00:17:01.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:01.710 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:17:01.710 16:13:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:01.710 16:13:02 -- nvmf/common.sh@411 -- # return 0
00:17:01.710 16:13:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:17:01.710 16:13:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:01.710 16:13:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:17:01.710 16:13:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:01.710 16:13:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:17:01.710 16:13:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:17:01.710 16:13:02 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:17:01.710 16:13:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:17:01.710 16:13:02 -- common/autotest_common.sh@710 -- # xtrace_disable
00:17:01.710 16:13:02 -- common/autotest_common.sh@10 -- # set +x
00:17:01.710 16:13:02 -- nvmf/common.sh@470 -- # nvmfpid=3424265
00:17:01.710 16:13:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:17:01.710 16:13:02 -- nvmf/common.sh@471 -- # waitforlisten 3424265
00:17:01.710 16:13:02 -- common/autotest_common.sh@817 -- # '[' -z 3424265 ']'
00:17:01.710 16:13:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:01.710 16:13:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:01.710 16:13:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:01.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:01.710 16:13:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:01.710 16:13:02 -- common/autotest_common.sh@10 -- # set +x
00:17:01.710 [2024-04-24 16:13:02.767208] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:17:01.710 [2024-04-24 16:13:02.767283] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:01.710 EAL: No free 2048 kB hugepages reported on node 1
00:17:01.710 [2024-04-24 16:13:02.835386] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:01.710 [2024-04-24 16:13:02.948593] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:01.710 [2024-04-24 16:13:02.948666] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:01.710 [2024-04-24 16:13:02.948683] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:01.710 [2024-04-24 16:13:02.948697] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:01.710 [2024-04-24 16:13:02.948709] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
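The nvmf_tcp_init sequence traced above boils down to the following commands: move the target-side port into a private network namespace, address both ends of the link, open TCP port 4420, and ping in both directions to prove connectivity (all values exactly as logged):

# nvmf_tcp_init, condensed from the nvmf/common.sh@244-268 trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # netns -> initiator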
00:17:01.710 [2024-04-24 16:13:02.948840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.710 [2024-04-24 16:13:02.948919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.710 [2024-04-24 16:13:02.948975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.710 [2024-04-24 16:13:02.948972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:02.694 16:13:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:02.694 16:13:03 -- common/autotest_common.sh@850 -- # return 0 00:17:02.694 16:13:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:02.694 16:13:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:02.694 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:02.694 16:13:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.694 16:13:03 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.694 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.694 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:02.694 [2024-04-24 16:13:03.743663] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.694 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.694 16:13:03 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:02.694 16:13:03 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:02.694 16:13:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:02.694 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:02.694 16:13:03 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:02.694 16:13:03 -- target/shutdown.sh@28 -- # cat 00:17:02.694 16:13:03 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:02.694 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.694 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:02.694 Malloc1 00:17:02.694 [2024-04-24 16:13:03.818541] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.694 Malloc2 
00:17:02.694 Malloc3 00:17:02.694 Malloc4 00:17:02.952 Malloc5 00:17:02.952 Malloc6 00:17:02.952 Malloc7 00:17:02.952 Malloc8 00:17:02.952 Malloc9 00:17:02.952 Malloc10 00:17:03.210 16:13:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.210 16:13:04 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:03.210 16:13:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:03.210 16:13:04 -- common/autotest_common.sh@10 -- # set +x 00:17:03.210 16:13:04 -- target/shutdown.sh@103 -- # perfpid=3424448 00:17:03.210 16:13:04 -- target/shutdown.sh@104 -- # waitforlisten 3424448 /var/tmp/bdevperf.sock 00:17:03.210 16:13:04 -- common/autotest_common.sh@817 -- # '[' -z 3424448 ']' 00:17:03.210 16:13:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.210 16:13:04 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:03.210 16:13:04 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:03.210 16:13:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:03.210 16:13:04 -- nvmf/common.sh@521 -- # config=() 00:17:03.210 16:13:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.210 16:13:04 -- nvmf/common.sh@521 -- # local subsystem config 00:17:03.210 16:13:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:03.210 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:03.210 16:13:04 -- common/autotest_common.sh@10 -- # set +x 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.210 { 00:17:03.210 "params": { 00:17:03.210 "name": "Nvme$subsystem", 00:17:03.210 "trtype": "$TEST_TRANSPORT", 00:17:03.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.210 "adrfam": "ipv4", 00:17:03.210 "trsvcid": "$NVMF_PORT", 00:17:03.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.210 "hdgst": ${hdgst:-false}, 00:17:03.210 "ddgst": ${ddgst:-false} 00:17:03.210 }, 00:17:03.210 "method": "bdev_nvme_attach_controller" 00:17:03.210 } 00:17:03.210 EOF 00:17:03.210 )") 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.210 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.210 { 00:17:03.210 "params": { 00:17:03.210 "name": "Nvme$subsystem", 00:17:03.210 "trtype": "$TEST_TRANSPORT", 00:17:03.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.210 "adrfam": "ipv4", 00:17:03.210 "trsvcid": "$NVMF_PORT", 00:17:03.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.210 "hdgst": ${hdgst:-false}, 00:17:03.210 "ddgst": ${ddgst:-false} 00:17:03.210 }, 00:17:03.210 "method": "bdev_nvme_attach_controller" 00:17:03.210 } 00:17:03.210 EOF 00:17:03.210 )") 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.210 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.210 { 00:17:03.210 "params": { 00:17:03.210 "name": "Nvme$subsystem", 00:17:03.210 "trtype": "$TEST_TRANSPORT", 00:17:03.210 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:17:03.210 "adrfam": "ipv4", 00:17:03.210 "trsvcid": "$NVMF_PORT", 00:17:03.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.210 "hdgst": ${hdgst:-false}, 00:17:03.210 "ddgst": ${ddgst:-false} 00:17:03.210 }, 00:17:03.210 "method": "bdev_nvme_attach_controller" 00:17:03.210 } 00:17:03.210 EOF 00:17:03.210 )") 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.210 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.210 { 00:17:03.210 "params": { 00:17:03.210 "name": "Nvme$subsystem", 00:17:03.210 "trtype": "$TEST_TRANSPORT", 00:17:03.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.210 "adrfam": "ipv4", 00:17:03.210 "trsvcid": "$NVMF_PORT", 00:17:03.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.210 "hdgst": ${hdgst:-false}, 00:17:03.210 "ddgst": ${ddgst:-false} 00:17:03.210 }, 00:17:03.210 "method": "bdev_nvme_attach_controller" 00:17:03.210 } 00:17:03.210 EOF 00:17:03.210 )") 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.210 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.210 { 00:17:03.210 "params": { 00:17:03.210 "name": "Nvme$subsystem", 00:17:03.210 "trtype": "$TEST_TRANSPORT", 00:17:03.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.210 "adrfam": "ipv4", 00:17:03.210 "trsvcid": "$NVMF_PORT", 00:17:03.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.210 "hdgst": ${hdgst:-false}, 00:17:03.210 "ddgst": ${ddgst:-false} 00:17:03.210 }, 00:17:03.210 "method": "bdev_nvme_attach_controller" 00:17:03.210 } 00:17:03.210 EOF 00:17:03.210 )") 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.210 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:03.210 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.210 { 00:17:03.210 "params": { 00:17:03.210 "name": "Nvme$subsystem", 00:17:03.210 "trtype": "$TEST_TRANSPORT", 00:17:03.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "$NVMF_PORT", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.211 "hdgst": ${hdgst:-false}, 00:17:03.211 "ddgst": ${ddgst:-false} 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 } 00:17:03.211 EOF 00:17:03.211 )") 00:17:03.211 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.211 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:03.211 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.211 { 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme$subsystem", 00:17:03.211 "trtype": "$TEST_TRANSPORT", 00:17:03.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "$NVMF_PORT", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.211 "hdgst": ${hdgst:-false}, 00:17:03.211 "ddgst": ${ddgst:-false} 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 } 00:17:03.211 EOF 00:17:03.211 )") 00:17:03.211 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.211 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:17:03.211 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.211 { 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme$subsystem", 00:17:03.211 "trtype": "$TEST_TRANSPORT", 00:17:03.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "$NVMF_PORT", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.211 "hdgst": ${hdgst:-false}, 00:17:03.211 "ddgst": ${ddgst:-false} 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 } 00:17:03.211 EOF 00:17:03.211 )") 00:17:03.211 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.211 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:03.211 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.211 { 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme$subsystem", 00:17:03.211 "trtype": "$TEST_TRANSPORT", 00:17:03.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "$NVMF_PORT", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.211 "hdgst": ${hdgst:-false}, 00:17:03.211 "ddgst": ${ddgst:-false} 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 } 00:17:03.211 EOF 00:17:03.211 )") 00:17:03.211 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.211 16:13:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:03.211 16:13:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:03.211 { 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme$subsystem", 00:17:03.211 "trtype": "$TEST_TRANSPORT", 00:17:03.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "$NVMF_PORT", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.211 "hdgst": ${hdgst:-false}, 00:17:03.211 "ddgst": ${ddgst:-false} 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 } 00:17:03.211 EOF 00:17:03.211 )") 00:17:03.211 16:13:04 -- nvmf/common.sh@543 -- # cat 00:17:03.211 16:13:04 -- nvmf/common.sh@545 -- # jq . 
00:17:03.211 16:13:04 -- nvmf/common.sh@546 -- # IFS=, 00:17:03.211 16:13:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme1", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 },{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme2", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 },{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme3", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 },{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme4", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 },{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme5", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 },{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme6", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 },{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme7", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 },{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme8", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": 
"bdev_nvme_attach_controller" 00:17:03.211 },{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme9", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 },{ 00:17:03.211 "params": { 00:17:03.211 "name": "Nvme10", 00:17:03.211 "trtype": "tcp", 00:17:03.211 "traddr": "10.0.0.2", 00:17:03.211 "adrfam": "ipv4", 00:17:03.211 "trsvcid": "4420", 00:17:03.211 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:03.211 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:03.211 "hdgst": false, 00:17:03.211 "ddgst": false 00:17:03.211 }, 00:17:03.211 "method": "bdev_nvme_attach_controller" 00:17:03.211 }' 00:17:03.211 [2024-04-24 16:13:04.319351] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:17:03.211 [2024-04-24 16:13:04.319425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424448 ] 00:17:03.211 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.211 [2024-04-24 16:13:04.382189] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.211 [2024-04-24 16:13:04.485702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.108 Running I/O for 10 seconds... 00:17:05.367 16:13:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:05.367 16:13:06 -- common/autotest_common.sh@850 -- # return 0 00:17:05.367 16:13:06 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:05.367 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.367 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.367 16:13:06 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:05.367 16:13:06 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:05.367 16:13:06 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:05.367 16:13:06 -- target/shutdown.sh@57 -- # local ret=1 00:17:05.367 16:13:06 -- target/shutdown.sh@58 -- # local i 00:17:05.367 16:13:06 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:05.367 16:13:06 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:05.367 16:13:06 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:05.367 16:13:06 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:05.367 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.367 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.367 16:13:06 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:05.367 16:13:06 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:05.367 16:13:06 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:05.626 16:13:06 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:05.626 16:13:06 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:05.626 16:13:06 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:05.626 16:13:06 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:05.626 16:13:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.626 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:17:05.626 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.626 16:13:06 -- target/shutdown.sh@60 -- # read_io_count=67 00:17:05.626 16:13:06 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:17:05.626 16:13:06 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:05.884 16:13:07 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:05.884 16:13:07 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:05.884 16:13:07 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:05.884 16:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.884 16:13:07 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:05.884 16:13:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.884 16:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.884 16:13:07 -- target/shutdown.sh@60 -- # read_io_count=131 00:17:05.884 16:13:07 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:17:05.884 16:13:07 -- target/shutdown.sh@64 -- # ret=0 00:17:05.884 16:13:07 -- target/shutdown.sh@65 -- # break 00:17:05.884 16:13:07 -- target/shutdown.sh@69 -- # return 0 00:17:05.884 16:13:07 -- target/shutdown.sh@110 -- # killprocess 3424448 00:17:05.884 16:13:07 -- common/autotest_common.sh@936 -- # '[' -z 3424448 ']' 00:17:05.884 16:13:07 -- common/autotest_common.sh@940 -- # kill -0 3424448 00:17:05.884 16:13:07 -- common/autotest_common.sh@941 -- # uname 00:17:05.884 16:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.884 16:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3424448 00:17:05.884 16:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:05.884 16:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:05.884 16:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3424448' 00:17:05.884 killing process with pid 3424448 00:17:05.884 16:13:07 -- common/autotest_common.sh@955 -- # kill 3424448 00:17:05.884 16:13:07 -- common/autotest_common.sh@960 -- # wait 3424448 00:17:06.142 Received shutdown signal, test time was about 0.946046 seconds 00:17:06.142 00:17:06.142 Latency(us) 00:17:06.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.142 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.142 Verification LBA range: start 0x0 length 0x400 00:17:06.142 Nvme1n1 : 0.90 212.80 13.30 0.00 0.00 297191.10 25243.50 273406.48 00:17:06.142 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.142 Verification LBA range: start 0x0 length 0x400 00:17:06.142 Nvme2n1 : 0.92 213.85 13.37 0.00 0.00 288314.48 4733.16 265639.25 00:17:06.142 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.142 Verification LBA range: start 0x0 length 0x400 00:17:06.142 Nvme3n1 : 0.90 216.64 13.54 0.00 0.00 277736.42 6941.96 284280.60 00:17:06.142 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.142 Verification LBA range: start 0x0 length 0x400 00:17:06.142 Nvme4n1 : 0.91 210.34 13.15 0.00 0.00 282296.83 36311.80 271853.04 00:17:06.142 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.142 Verification LBA range: start 0x0 length 0x400 00:17:06.142 Nvme5n1 : 0.91 209.84 13.12 0.00 0.00 276739.16 18738.44 287387.50 00:17:06.142 Job: Nvme6n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:17:06.142 Verification LBA range: start 0x0 length 0x400 00:17:06.142 Nvme6n1 : 0.94 209.26 13.08 0.00 0.00 272032.67 1868.99 310689.19 00:17:06.142 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.142 Verification LBA range: start 0x0 length 0x400 00:17:06.142 Nvme7n1 : 0.92 207.85 12.99 0.00 0.00 268006.34 34564.17 264085.81 00:17:06.142 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.143 Verification LBA range: start 0x0 length 0x400 00:17:06.143 Nvme8n1 : 0.94 204.44 12.78 0.00 0.00 267191.44 21262.79 299815.06 00:17:06.143 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.143 Verification LBA range: start 0x0 length 0x400 00:17:06.143 Nvme9n1 : 0.93 210.30 13.14 0.00 0.00 252882.52 2075.31 264085.81 00:17:06.143 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.143 Verification LBA range: start 0x0 length 0x400 00:17:06.143 Nvme10n1 : 0.95 203.12 12.70 0.00 0.00 257580.50 19029.71 327777.09 00:17:06.143 =================================================================================================================== 00:17:06.143 Total : 2098.43 131.15 0.00 0.00 273994.16 1868.99 327777.09 00:17:06.401 16:13:07 -- target/shutdown.sh@113 -- # sleep 1 00:17:07.356 16:13:08 -- target/shutdown.sh@114 -- # kill -0 3424265 00:17:07.357 16:13:08 -- target/shutdown.sh@116 -- # stoptarget 00:17:07.357 16:13:08 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:07.357 16:13:08 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:07.357 16:13:08 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:07.357 16:13:08 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:07.357 16:13:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:07.357 16:13:08 -- nvmf/common.sh@117 -- # sync 00:17:07.357 16:13:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.357 16:13:08 -- nvmf/common.sh@120 -- # set +e 00:17:07.357 16:13:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.357 16:13:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.357 rmmod nvme_tcp 00:17:07.357 rmmod nvme_fabrics 00:17:07.357 rmmod nvme_keyring 00:17:07.357 16:13:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.357 16:13:08 -- nvmf/common.sh@124 -- # set -e 00:17:07.357 16:13:08 -- nvmf/common.sh@125 -- # return 0 00:17:07.357 16:13:08 -- nvmf/common.sh@478 -- # '[' -n 3424265 ']' 00:17:07.357 16:13:08 -- nvmf/common.sh@479 -- # killprocess 3424265 00:17:07.357 16:13:08 -- common/autotest_common.sh@936 -- # '[' -z 3424265 ']' 00:17:07.357 16:13:08 -- common/autotest_common.sh@940 -- # kill -0 3424265 00:17:07.357 16:13:08 -- common/autotest_common.sh@941 -- # uname 00:17:07.357 16:13:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.357 16:13:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3424265 00:17:07.357 16:13:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:07.357 16:13:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:07.357 16:13:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3424265' 00:17:07.357 killing process with pid 3424265 00:17:07.357 16:13:08 -- common/autotest_common.sh@955 -- # kill 3424265 00:17:07.357 16:13:08 -- common/autotest_common.sh@960 -- 
# wait 3424265 00:17:07.924 16:13:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:07.924 16:13:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:07.924 16:13:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:07.924 16:13:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.924 16:13:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.924 16:13:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.924 16:13:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.924 16:13:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.829 16:13:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:09.829 00:17:09.829 real 0m8.543s 00:17:09.829 user 0m26.653s 00:17:09.829 sys 0m1.603s 00:17:09.829 16:13:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:09.829 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:09.829 ************************************ 00:17:09.829 END TEST nvmf_shutdown_tc2 00:17:09.829 ************************************ 00:17:10.133 16:13:11 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:10.133 16:13:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:10.133 16:13:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.133 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.133 ************************************ 00:17:10.133 START TEST nvmf_shutdown_tc3 00:17:10.133 ************************************ 00:17:10.133 16:13:11 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:17:10.133 16:13:11 -- target/shutdown.sh@121 -- # starttarget 00:17:10.133 16:13:11 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:10.133 16:13:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:10.133 16:13:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.133 16:13:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:10.133 16:13:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:10.133 16:13:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:10.133 16:13:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.133 16:13:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.133 16:13:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.133 16:13:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:10.133 16:13:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.133 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.133 16:13:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:10.133 16:13:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.133 16:13:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.133 16:13:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.133 16:13:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.133 16:13:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.133 16:13:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.133 16:13:11 -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.133 16:13:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.133 16:13:11 -- nvmf/common.sh@296 -- # e810=() 00:17:10.133 16:13:11 -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.133 16:13:11 -- nvmf/common.sh@297 -- # x722=() 00:17:10.133 16:13:11 -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.133 16:13:11 -- nvmf/common.sh@298 -- # mlx=() 
00:17:10.133 16:13:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.133 16:13:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.133 16:13:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.133 16:13:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.133 16:13:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.133 16:13:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.133 16:13:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:10.133 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:10.133 16:13:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.133 16:13:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:10.133 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:10.133 16:13:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.133 16:13:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.134 16:13:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.134 16:13:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.134 16:13:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.134 16:13:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.134 16:13:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.134 16:13:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:10.134 16:13:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.134 16:13:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:10.134 Found net devices under 0000:09:00.0: cvl_0_0 00:17:10.134 16:13:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.134 16:13:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.134 16:13:11 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.134 16:13:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:10.134 16:13:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.134 16:13:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:10.134 Found net devices under 0000:09:00.1: cvl_0_1 00:17:10.134 16:13:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.134 16:13:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:10.134 16:13:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:10.134 16:13:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:10.134 16:13:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:10.134 16:13:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:10.134 16:13:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.134 16:13:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.134 16:13:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.134 16:13:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.134 16:13:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.134 16:13:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.134 16:13:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.134 16:13:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.134 16:13:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.134 16:13:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.134 16:13:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.134 16:13:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.134 16:13:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.134 16:13:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.134 16:13:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.134 16:13:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.134 16:13:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.134 16:13:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.134 16:13:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.134 16:13:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:17:10.134 00:17:10.134 --- 10.0.0.2 ping statistics --- 00:17:10.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.134 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:17:10.134 16:13:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:17:10.134 00:17:10.134 --- 10.0.0.1 ping statistics --- 00:17:10.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.134 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:17:10.134 16:13:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.134 16:13:11 -- nvmf/common.sh@411 -- # return 0 00:17:10.134 16:13:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:10.134 16:13:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.134 16:13:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:10.134 16:13:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:10.134 16:13:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.134 16:13:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:10.134 16:13:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:10.134 16:13:11 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:10.134 16:13:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:10.134 16:13:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:10.134 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.134 16:13:11 -- nvmf/common.sh@470 -- # nvmfpid=3425372 00:17:10.134 16:13:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:10.134 16:13:11 -- nvmf/common.sh@471 -- # waitforlisten 3425372 00:17:10.134 16:13:11 -- common/autotest_common.sh@817 -- # '[' -z 3425372 ']' 00:17:10.134 16:13:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.134 16:13:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:10.134 16:13:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.134 16:13:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:10.134 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.393 [2024-04-24 16:13:11.457365] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:17:10.393 [2024-04-24 16:13:11.457448] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.393 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.393 [2024-04-24 16:13:11.526538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.393 [2024-04-24 16:13:11.645349] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.393 [2024-04-24 16:13:11.645407] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.393 [2024-04-24 16:13:11.645423] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.393 [2024-04-24 16:13:11.645437] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.393 [2024-04-24 16:13:11.645448] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
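The nvmfappstart step above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the target's RPC socket on /var/tmp/spdk.sock answers. The helper's body is not shown in this trace; a minimal sketch of the wait pattern, assuming scripts/rpc.py with the rpc_get_methods call as the readiness probe:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        # Give up immediately if the target died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # Any cheap RPC works as a readiness probe once the socket is up.
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

Called as waitforlisten_sketch "$nvmfpid", it returns 0 as soon as the target accepts RPCs, which is what lets the script proceed to nvmf_create_transport below.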
00:17:10.393 [2024-04-24 16:13:11.645563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.393 [2024-04-24 16:13:11.648763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.393 [2024-04-24 16:13:11.648900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:10.393 [2024-04-24 16:13:11.648906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.329 16:13:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:11.329 16:13:12 -- common/autotest_common.sh@850 -- # return 0 00:17:11.329 16:13:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:11.329 16:13:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:11.329 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:11.329 16:13:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.329 16:13:12 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.329 16:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.329 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:11.329 [2024-04-24 16:13:12.420353] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.329 16:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.329 16:13:12 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:11.329 16:13:12 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:11.329 16:13:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:11.329 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:11.329 16:13:12 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:11.329 16:13:12 -- target/shutdown.sh@28 -- # cat 00:17:11.329 16:13:12 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:11.329 16:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.329 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:11.329 Malloc1 00:17:11.329 [2024-04-24 16:13:12.509438] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.329 Malloc2 
00:17:11.329 Malloc3 00:17:11.586 Malloc4 00:17:11.586 Malloc5 00:17:11.586 Malloc6 00:17:11.586 Malloc7 00:17:11.586 Malloc8 00:17:11.844 Malloc9 00:17:11.844 Malloc10 00:17:11.844 16:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.844 16:13:12 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:11.844 16:13:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:11.844 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:11.844 16:13:12 -- target/shutdown.sh@125 -- # perfpid=3425685 00:17:11.844 16:13:12 -- target/shutdown.sh@126 -- # waitforlisten 3425685 /var/tmp/bdevperf.sock 00:17:11.844 16:13:12 -- common/autotest_common.sh@817 -- # '[' -z 3425685 ']' 00:17:11.844 16:13:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.844 16:13:12 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:11.844 16:13:12 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:11.844 16:13:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:11.844 16:13:12 -- nvmf/common.sh@521 -- # config=() 00:17:11.844 16:13:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.844 16:13:12 -- nvmf/common.sh@521 -- # local subsystem config 00:17:11.845 16:13:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:11.845 16:13:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:11.845 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:12 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:11.845 { 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme$subsystem", 00:17:11.845 "trtype": "$TEST_TRANSPORT", 00:17:11.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "$NVMF_PORT", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.845 "hdgst": ${hdgst:-false}, 00:17:11.845 "ddgst": ${ddgst:-false} 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 } 00:17:11.845 EOF 00:17:11.845 )") 00:17:11.845 16:13:13 -- nvmf/common.sh@543 -- # cat 00:17:11.845 16:13:13 -- nvmf/common.sh@545 -- # jq . 
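Once bdevperf is attached, shutdown.sh's waitforio (traced shortly below) polls Nvme1n1's read counter over the bdevperf RPC socket until it crosses 100 operations; in this run the counter climbs 3, 67, 131 before the loop breaks. A condensed sketch of that loop, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper:

waitforio_sketch() {
    local sock=$1 bdev=$2 ret=1 i count
    # Same shape as the traced loop: at most 10 polls, 0.25 s apart,
    # succeeding once the bdev has completed at least 100 reads.
    for ((i = 10; i != 0; i--)); do
        count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

Invoked as waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1, the 100-read threshold simply guarantees I/O is flowing before the test starts killing processes underneath it.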
00:17:11.845 16:13:13 -- nvmf/common.sh@546 -- # IFS=, 00:17:11.845 16:13:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme1", 00:17:11.845 "trtype": "tcp", 00:17:11.845 "traddr": "10.0.0.2", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "4420", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.845 "hdgst": false, 00:17:11.845 "ddgst": false 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 },{ 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme2", 00:17:11.845 "trtype": "tcp", 00:17:11.845 "traddr": "10.0.0.2", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "4420", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:11.845 "hdgst": false, 00:17:11.845 "ddgst": false 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 },{ 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme3", 00:17:11.845 "trtype": "tcp", 00:17:11.845 "traddr": "10.0.0.2", 00:17:11.845 "adrfam": "ipv4", 00:17:11.845 "trsvcid": "4420", 00:17:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:11.845 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:11.845 "hdgst": false, 00:17:11.845 "ddgst": false 00:17:11.845 }, 00:17:11.845 "method": "bdev_nvme_attach_controller" 00:17:11.845 },{ 00:17:11.845 "params": { 00:17:11.845 "name": "Nvme4", 00:17:11.845 "trtype": "tcp", 00:17:11.846 "traddr": "10.0.0.2", 00:17:11.846 "adrfam": "ipv4", 00:17:11.846 "trsvcid": "4420", 00:17:11.846 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:11.846 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:11.846 "hdgst": false, 00:17:11.846 "ddgst": false 00:17:11.846 }, 00:17:11.846 "method": "bdev_nvme_attach_controller" 00:17:11.846 },{ 00:17:11.846 "params": { 00:17:11.846 "name": "Nvme5", 00:17:11.846 "trtype": "tcp", 00:17:11.846 "traddr": "10.0.0.2", 00:17:11.846 "adrfam": "ipv4", 00:17:11.846 "trsvcid": "4420", 00:17:11.846 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:11.846 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:11.846 "hdgst": false, 00:17:11.846 "ddgst": false 00:17:11.846 }, 00:17:11.846 "method": "bdev_nvme_attach_controller" 00:17:11.846 },{ 00:17:11.846 "params": { 00:17:11.846 "name": "Nvme6", 00:17:11.846 "trtype": "tcp", 00:17:11.846 "traddr": "10.0.0.2", 00:17:11.846 "adrfam": "ipv4", 00:17:11.846 "trsvcid": "4420", 00:17:11.846 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:11.846 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:11.846 "hdgst": false, 00:17:11.846 "ddgst": false 00:17:11.846 }, 00:17:11.846 "method": "bdev_nvme_attach_controller" 00:17:11.846 },{ 00:17:11.846 "params": { 00:17:11.846 "name": "Nvme7", 00:17:11.846 "trtype": "tcp", 00:17:11.846 "traddr": "10.0.0.2", 00:17:11.846 "adrfam": "ipv4", 00:17:11.846 "trsvcid": "4420", 00:17:11.846 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:11.846 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:11.846 "hdgst": false, 00:17:11.846 "ddgst": false 00:17:11.846 }, 00:17:11.846 "method": "bdev_nvme_attach_controller" 00:17:11.846 },{ 00:17:11.846 "params": { 00:17:11.846 "name": "Nvme8", 00:17:11.846 "trtype": "tcp", 00:17:11.846 "traddr": "10.0.0.2", 00:17:11.846 "adrfam": "ipv4", 00:17:11.846 "trsvcid": "4420", 00:17:11.846 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:11.846 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:11.846 "hdgst": false, 00:17:11.846 "ddgst": false 00:17:11.846 }, 00:17:11.846 "method": 
"bdev_nvme_attach_controller" 00:17:11.846 },{ 00:17:11.846 "params": { 00:17:11.846 "name": "Nvme9", 00:17:11.846 "trtype": "tcp", 00:17:11.846 "traddr": "10.0.0.2", 00:17:11.846 "adrfam": "ipv4", 00:17:11.846 "trsvcid": "4420", 00:17:11.846 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:11.846 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:11.846 "hdgst": false, 00:17:11.846 "ddgst": false 00:17:11.846 }, 00:17:11.846 "method": "bdev_nvme_attach_controller" 00:17:11.846 },{ 00:17:11.846 "params": { 00:17:11.846 "name": "Nvme10", 00:17:11.846 "trtype": "tcp", 00:17:11.846 "traddr": "10.0.0.2", 00:17:11.846 "adrfam": "ipv4", 00:17:11.846 "trsvcid": "4420", 00:17:11.846 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:11.846 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:11.846 "hdgst": false, 00:17:11.846 "ddgst": false 00:17:11.846 }, 00:17:11.846 "method": "bdev_nvme_attach_controller" 00:17:11.846 }' 00:17:11.846 [2024-04-24 16:13:13.027158] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:17:11.846 [2024-04-24 16:13:13.027231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425685 ] 00:17:11.846 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.846 [2024-04-24 16:13:13.089703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.104 [2024-04-24 16:13:13.194044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.008 Running I/O for 10 seconds... 00:17:14.008 16:13:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:14.008 16:13:14 -- common/autotest_common.sh@850 -- # return 0 00:17:14.008 16:13:14 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:14.008 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.008 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:17:14.008 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.008 16:13:15 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.008 16:13:15 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:14.008 16:13:15 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:14.008 16:13:15 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:14.008 16:13:15 -- target/shutdown.sh@57 -- # local ret=1 00:17:14.008 16:13:15 -- target/shutdown.sh@58 -- # local i 00:17:14.008 16:13:15 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:14.008 16:13:15 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:14.008 16:13:15 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:14.008 16:13:15 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:14.008 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.008 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:17:14.008 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.008 16:13:15 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:14.008 16:13:15 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:14.008 16:13:15 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:14.268 16:13:15 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:14.268 16:13:15 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:14.268 16:13:15 -- target/shutdown.sh@60 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:14.268 16:13:15 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:14.268 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.268 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:17:14.268 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.268 16:13:15 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:14.268 16:13:15 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:14.268 16:13:15 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:14.527 16:13:15 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:14.527 16:13:15 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:14.527 16:13:15 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:14.527 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.527 16:13:15 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:14.527 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:17:14.527 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.527 16:13:15 -- target/shutdown.sh@60 -- # read_io_count=67 00:17:14.527 16:13:15 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:17:14.527 16:13:15 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:14.786 16:13:16 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:14.786 16:13:16 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:14.786 16:13:16 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:14.786 16:13:16 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:14.786 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.786 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:17:14.786 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.065 16:13:16 -- target/shutdown.sh@60 -- # read_io_count=131 00:17:15.065 16:13:16 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:17:15.065 16:13:16 -- target/shutdown.sh@64 -- # ret=0 00:17:15.065 16:13:16 -- target/shutdown.sh@65 -- # break 00:17:15.065 16:13:16 -- target/shutdown.sh@69 -- # return 0 00:17:15.065 16:13:16 -- target/shutdown.sh@135 -- # killprocess 3425372 00:17:15.065 16:13:16 -- common/autotest_common.sh@936 -- # '[' -z 3425372 ']' 00:17:15.065 16:13:16 -- common/autotest_common.sh@940 -- # kill -0 3425372 00:17:15.065 16:13:16 -- common/autotest_common.sh@941 -- # uname 00:17:15.065 16:13:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.065 16:13:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3425372 00:17:15.065 16:13:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:15.065 16:13:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:15.065 16:13:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3425372' 00:17:15.065 killing process with pid 3425372 00:17:15.065 16:13:16 -- common/autotest_common.sh@955 -- # kill 3425372 00:17:15.065 16:13:16 -- common/autotest_common.sh@960 -- # wait 3425372 00:17:15.065 [2024-04-24 16:13:16.109797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000dd0 is same with the state(5) to be set 00:17:15.065 [2024-04-24 16:13:16.109919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000dd0 is same with the state(5) to be set 00:17:15.065 [2024-04-24 16:13:16.109936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000dd0 is same with the state(5) 
to be set 00:17:15.065 [same nvmf_tcp_qpair_set_recv_state *ERROR* trace repeated many more times for tqpair=0x1000dd0, then for tqpair=0x1003720] 00:17:15.066 [2024-04-24 16:13:16.112479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112504] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112549] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112574] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112620] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112633] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112645] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112658] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112670] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112682] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112695] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112707] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112732] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112754] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 16:13:16.112768] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003720 is same with the state(5) to be set 00:17:15.066 [2024-04-24 
00:17:15.066 [2024-04-24 16:13:16.112758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:15.066 [2024-04-24 16:13:16.112808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:15.066 [2024-04-24 16:13:16.112826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:15.066 [2024-04-24 16:13:16.112842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:15.066 [2024-04-24 16:13:16.112858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:15.066 [2024-04-24 16:13:16.112872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:15.066 [2024-04-24 16:13:16.112887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:15.066 [2024-04-24 16:13:16.112901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:15.067 [2024-04-24 16:13:16.112915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9870 is same with the state(5) to be set
[... same recv-state message for tqpair=0x1003720 repeated, interleaved with the lines above, through 16:13:16.112979; duplicate lines omitted ...]
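[Editorial note] The flooding line comes from a guard in SPDK's TCP transport (tcp.c:1587, nvmf_tcp_qpair_set_recv_state in this build): when a caller asks a queue pair's PDU receive state machine to enter the state it is already in, the function logs and returns instead of transitioning, and the microsecond-spaced repeats above show it being re-invoked in a tight poll loop during qpair teardown. Below is a minimal sketch of that pattern; the struct, enum, and function names here (struct tqpair, pdu_recv_state, set_recv_state) are illustrative stand-ins, fprintf stands in for SPDK's error-log macro, and only the numeric state value 5 is taken from the log itself. It is not the verbatim SPDK source.

    #include <stdio.h>

    /* Simplified stand-in for SPDK's PDU receive-state enum; the log
     * above shows only the numeric value 5, so the name is a guess. */
    enum pdu_recv_state {
            PDU_RECV_STATE_INITIAL = 0,
            PDU_RECV_STATE_REQUESTED = 5,
    };

    /* Simplified stand-in for the transport's per-connection qpair. */
    struct tqpair {
            enum pdu_recv_state recv_state;
    };

    static void
    set_recv_state(struct tqpair *tqpair, enum pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* Re-entering the current state is a no-op: log and
                     * return. A caller retrying in a loop emits one such
                     * line per attempt, producing the flood seen above. */
                    fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                            (void *)tqpair, (int)state);
                    return;
            }
            tqpair->recv_state = state;
            /* ... per-state bookkeeping would follow here ... */
    }

    int main(void)
    {
            struct tqpair q = { .recv_state = PDU_RECV_STATE_REQUESTED };
            /* Three redundant transition requests -> three identical
             * error lines, mirroring the captured log. */
            for (int i = 0; i < 3; i++) {
                    set_recv_state(&q, PDU_RECV_STATE_REQUESTED);
            }
            return 0;
    }

The interleaved NOTICE records above are consistent with the same teardown: the four outstanding ASYNC EVENT REQUEST admin commands (cid 0-3) complete with ABORTED - SQ DELETION (00/08) as the admin queue is torn down. [End editorial note]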
00:17:15.067 [2024-04-24 16:13:16.115626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001280 is same with the state(5) to be set
[... same message repeated for tqpair=0x1001280 through 16:13:16.116491; duplicate lines omitted ...]
00:17:15.067 [2024-04-24 16:13:16.118244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001710 is same with the state(5) to be set
[... same message repeated for tqpair=0x1001710 through 16:13:16.119003; duplicate lines omitted ...]
00:17:15.068 [2024-04-24 16:13:16.120159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001ba0 is same with the state(5) to be set
[... same message repeated for tqpair=0x1001ba0 through 16:13:16.120967; duplicate lines omitted ...]
00:17:15.069 [2024-04-24 16:13:16.121686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002030 is same with the state(5) to be set
[... same message repeated for tqpair=0x1002030 at 16:13:16.121714 and 16:13:16.121729 ...]
00:17:15.069 [2024-04-24 16:13:16.122537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10024e0 is same with the state(5) to be set
[... same message repeated for tqpair=0x10024e0 through 16:13:16.123344; duplicate lines omitted ...]
00:17:15.070 [2024-04-24 16:13:16.124635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set
[... same message repeated for tqpair=0x1002970 through 16:13:16.125252; duplicate lines omitted ...]
00:17:15.070 [2024-04-24 16:13:16.125264] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same
with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125276] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125336] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125385] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.070 [2024-04-24 16:13:16.125398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.125411] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.125423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.125435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002970 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126406] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126473] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126497] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126509] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126603] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126615] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126638] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126697] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126732] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126751] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the 
state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126800] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126942] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.126990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127009] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127021] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127033] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127045] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127057] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127105] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127157] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127181] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002e00 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127921] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127947] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.127984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128005] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 
16:13:16.128029] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128090] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128102] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128115] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.071 [2024-04-24 16:13:16.128139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128152] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128195] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128220] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128245] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same 
with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128419] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128454] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128529] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003290 is same with the state(5) to be set 00:17:15.072 [2024-04-24 16:13:16.128565] 
00:17:15.072 [2024-04-24 16:13:16.136306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.072 [2024-04-24 16:13:16.136363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... analogous WRITE / ABORTED - SQ DELETION record pairs for cid:5-63 and cid:0-3, lba:33408-41344 in steps of 128, 16:13:16.136395-16:13:16.138306; condensed ...]
00:17:15.074 [2024-04-24 16:13:16.138357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:17:15.074 [2024-04-24 16:13:16.138437] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18b36f0 was disconnected and freed. reset controller. 
00:17:15.074 [2024-04-24 16:13:16.138801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.074 [2024-04-24 16:13:16.138826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... analogous WRITE / ABORTED - SQ DELETION record pairs for cid:31-63, lba:28544-32640 in steps of 128, through 16:13:16.139836; condensed ...]
00:17:15.075 [2024-04-24 16:13:16.139852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.139866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... analogous READ / ABORTED - SQ DELETION record pairs for cid:1-15, lba:24704-26496 in steps of 128, through 16:13:16.140317; condensed ...] 00:17:15.075 [2024-04-24 16:13:16.140333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.075 [2024-04-24 16:13:16.140733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.140783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:17:15.075 [2024-04-24 16:13:16.140868] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18b6040 was disconnected and freed. reset controller. 00:17:15.075 [2024-04-24 16:13:16.141636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.075 [2024-04-24 16:13:16.141666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.141682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.075 [2024-04-24 16:13:16.141696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.075 [2024-04-24 16:13:16.141709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.075 [2024-04-24 16:13:16.141722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.141736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.141758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.141771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991150 is same with the state(5) to be set 00:17:15.076 [2024-04-24 16:13:16.141830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.141850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.141865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.141878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.141891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.141904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.141918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.141930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.141943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b6400 is same with the state(5) to be set 00:17:15.076 [2024-04-24 16:13:16.141991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4c00 is same with the state(5) to be set 00:17:15.076 [2024-04-24 16:13:16.142166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142297] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa20 is same with the state(5) to be set 00:17:15.076 [2024-04-24 16:13:16.142396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181f8f0 is same with the state(5) to be set 00:17:15.076 [2024-04-24 16:13:16.142549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff380 is same with the state(5) to be set 00:17:15.076 [2024-04-24 16:13:16.142707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a01f0 is same with the state(5) to be set 00:17:15.076 [2024-04-24 16:13:16.142872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e9870 (9): Bad file descriptor 00:17:15.076 [2024-04-24 16:13:16.142922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.142984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.142997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.143010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.143023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.143035] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6cc0 is same with the state(5) to be set 00:17:15.076 [2024-04-24 16:13:16.143079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.143098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.143113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.143126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.143144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.143158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.143172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.076 [2024-04-24 16:13:16.143185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.143198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9560 is same with the state(5) to be set 00:17:15.076 [2024-04-24 16:13:16.143390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.076 [2024-04-24 16:13:16.143417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.076 [2024-04-24 16:13:16.143450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.076 [2024-04-24 16:13:16.143475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.143973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.143988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.077 [2024-04-24 16:13:16.144669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.077 [2024-04-24 16:13:16.144683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.144983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.144997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:15.078 [2024-04-24 16:13:16.145113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.078 [2024-04-24 16:13:16.145360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.078 [2024-04-24 16:13:16.145374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b0f50 is same with the state(5) to be set 00:17:15.078 [2024-04-24 16:13:16.145449] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18b0f50 was disconnected and freed. reset controller. 
00:17:15.078 [2024-04-24 16:13:16.148382] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:17:15.078 [2024-04-24 16:13:16.148432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:17:15.078 [2024-04-24 16:13:16.148462] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:17:15.078 [2024-04-24 16:13:16.148487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181f8f0 (9): Bad file descriptor
00:17:15.078 [2024-04-24 16:13:16.148508] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ff380 (9): Bad file descriptor
00:17:15.078 [2024-04-24 16:13:16.150288] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:15.078 (each reconnect attempt then fails the same way: two posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 lines, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x... with addr=10.0.0.2, port=4420, and one recv-state error, for tqpair=0x17ff380, 0x181f8f0 and 0x13e9870)
00:17:15.079 (nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 is reported six times, [2024-04-24 16:13:16.152804] through [2024-04-24 16:13:16.153183], followed by Failed to flush ... (9): Bad file descriptor for tqpair=0x17ff380, 0x181f8f0, 0x13e9870, 0x1991150, 0x19b6400, 0x19b4c00, 0x19bfa20, 0x19a01f0, 0x19c6cc0 and 0x13e9560)
00:17:15.079 [2024-04-24 16:13:16.153881] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:17:15.079 [2024-04-24 16:13:16.153962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:17:15.079 [2024-04-24 16:13:16.153980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:17:15.079 (the same error-state / reinitialization-failed / failed-state triple follows for cnode5 and cnode1, then bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. is printed three times)
00:17:15.079 [2024-04-24 16:13:16.160502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:15.079 [2024-04-24 16:13:16.160561] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:17:15.079 [2024-04-24 16:13:16.160641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:17:15.079 (the retry fails identically: connect() failed, errno = 111 and sock connection error of tqpair=0x13e9870, 0x181f8f0 and 0x17ff380 with addr=10.0.0.2, port=4420, Failed to flush ... (9): Bad file descriptor for the same three tqpairs, the error-state / reinitialization-failed / failed-state triple for cnode1, cnode5 and cnode3, and Resetting controller failed. three more times)
00:17:15.079 [2024-04-24 16:13:16.163609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:15.079 (a third abort dump begins: WRITE cid:0-3, lba:32768-33152, then READ cid:4-16, lba:25088-26624, all len:128, each aborted with the same SQ DELETION status)
00:17:15.080 [2024-04-24 16:13:16.164174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:15.080 [2024-04-24 16:13:16.164188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.164977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.164993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.080 [2024-04-24 16:13:16.165282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.080 [2024-04-24 16:13:16.165298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.165572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.165587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b2290 is same with the state(5) to be set 00:17:15.081 [2024-04-24 16:13:16.166897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.166925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.166948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.166964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.166979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167019] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.081 [2024-04-24 16:13:16.167759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.081 [2024-04-24 16:13:16.167776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.167790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.167805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.167819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.167834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.167848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.167863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.167878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.167894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.167908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.167923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.167937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.167952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.167966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.167983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:15.082 [2024-04-24 16:13:16.168243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 
16:13:16.168535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.168843] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.168857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4be0 is same with the state(5) to be set 00:17:15.082 [2024-04-24 16:13:16.170114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.170137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.170157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.170172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.170188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.082 [2024-04-24 16:13:16.170201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.082 [2024-04-24 16:13:16.170217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.170977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.170992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.171006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.171021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.171037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.171053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.171067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.171082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.171095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.171111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.171124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.171139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.171153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.171168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.171181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.171196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.083 [2024-04-24 16:13:16.171213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.083 [2024-04-24 16:13:16.171229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.171974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.171989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.172007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.172021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f58e0 is same with the state(5) to be set 00:17:15.084 [2024-04-24 16:13:16.173260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173405] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.084 [2024-04-24 16:13:16.173611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.084 [2024-04-24 16:13:16.173626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.173975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.173991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:15.085 [2024-04-24 16:13:16.174605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.085 [2024-04-24 16:13:16.174801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.085 [2024-04-24 16:13:16.174815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.174830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.174843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.174858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.174872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.174887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.174901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 
16:13:16.174916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.174930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.174945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.174959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.174975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.174988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.175013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.175026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.175041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.175058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.175073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.175086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.175101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.175114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.175129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.175142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.175157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.175170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.086 [2024-04-24 16:13:16.175184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6d20 is same with the state(5) to be set 00:17:15.086 [2024-04-24 16:13:16.176405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.086 [2024-04-24 16:13:16.176427] 
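(For reference when reading the notices above: spdk_nvme_print_completion renders NVMe status as (SCT/SC), so (00/08) is status code type 0x00, generic command status, with status code 0x08, "Command Aborted due to SQ Deletion" — the expected completion for I/O still queued when its submission queue is torn down, not a media or transport data error. A minimal sketch of how a completion callback can recognize the case, assuming only the public definitions in SPDK's include/spdk/nvme_spec.h; the function name is illustrative:

#include <stdio.h>
#include "spdk/nvme_spec.h"

/* Completion callback fragment: distinguish "aborted because the SQ was
 * deleted" -- the (00/08) completions in this log -- from real I/O errors. */
static void
on_read_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Queue teardown raced with this command; it can be resubmitted
		 * on another qpair rather than counted as a failure. */
		printf("cid:%u aborted by SQ deletion\n", cpl->cid);
	} else if (spdk_nvme_cpl_is_error(cpl)) {
		printf("cid:%u failed: sct:0x%x sc:0x%x\n",
		       cpl->cid, cpl->status.sct, cpl->status.sc);
	}
}

The signature matches spdk_nvme_cmd_cb, so the same function can be passed directly as the callback of spdk_nvme_ns_cmd_read.)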
00:17:15.086 [2024-04-24 16:13:16.176405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:15.086 [2024-04-24 16:13:16.176427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for cid:1-63 (lba:16512-24448 in steps of 128), 2024-04-24 16:13:16.176448 through 16:13:16.178315 ...]
00:17:15.088 [2024-04-24 16:13:16.178329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8180 is same with the state(5) to be set
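(The shape of each burst above — every queued READ on a qpair completing (00/08) back-to-back, followed by nvme_tcp re-setting the receive state during teardown — is what disconnecting a qpair with I/O still in flight looks like from the host side. A sketch of that sequence against SPDK's public host API; ctrlr and ns stand in for an already-probed controller and active namespace, the helper name is illustrative, and return-code checks are elided:

#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/env.h"

static void
read_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	int *aborts = cb_arg;

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		(*aborts)++;	/* one of the (00/08) completions logged above */
	}
}

static void
queue_reads_then_drop_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
	struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	/* 128 blocks x 512 B, assuming a 512-byte-block namespace; adjust to
	 * the namespace's actual block size. */
	void *buf = spdk_zmalloc(128 * 512, 0x1000, NULL,
				 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	int aborts = 0;

	/* Queue 128-block READs, mirroring the lba/len pattern in the log. */
	for (uint64_t lba = 24576; lba <= 32640; lba += 128) {
		spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 128, read_done, &aborts, 0);
	}

	/* Freeing the qpair deletes its submission queue while the READs are
	 * outstanding; each pending command completes ABORTED - SQ DELETION. */
	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_free(buf);
	printf("%d reads aborted by SQ deletion\n", aborts);
}

Whether every callback fires during the free or on a final completion-processing pass is transport-specific; the point is only that these aborts are an expected artifact of queue teardown, which is why the log prints them at *NOTICE* rather than *ERROR* severity.)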
00:17:15.088 [2024-04-24 16:13:16.179550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:15.088 [2024-04-24 16:13:16.179572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for cid:1-35 (lba:24704-29056 in steps of 128), 2024-04-24 16:13:16.179593 through 16:13:16.180636 ...]
00:17:15.089 [2024-04-24 16:13:16.180651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.180943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:15.089 [2024-04-24 16:13:16.180972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.180987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 
16:13:16.181261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.181458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.181472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a73e0 is same with the state(5) to be set 00:17:15.089 [2024-04-24 16:13:16.183768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.089 [2024-04-24 16:13:16.183802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.089 [2024-04-24 16:13:16.183825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.183841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.183857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.183870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.183886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.183899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.183919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.183934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.183949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.183962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.183978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.183991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.090 [2024-04-24 16:13:16.184983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.090 [2024-04-24 16:13:16.184996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:15.091 [2024-04-24 16:13:16.185351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 
16:13:16.185649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.091 [2024-04-24 16:13:16.185678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.091 [2024-04-24 16:13:16.185692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264dd70 is same with the state(5) to be set 00:17:15.091 [2024-04-24 16:13:16.187345] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:17:15.091 [2024-04-24 16:13:16.187375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:17:15.091 [2024-04-24 16:13:16.187394] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:17:15.091 [2024-04-24 16:13:16.187411] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:17:15.091 [2024-04-24 16:13:16.187532] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:15.091 [2024-04-24 16:13:16.187563] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:15.091 [2024-04-24 16:13:16.187583] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:15.091 [2024-04-24 16:13:16.187695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:17:15.091 [2024-04-24 16:13:16.187721] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:17:15.091 task offset: 33280 on job bdev=Nvme3n1 fails 00:17:15.091 00:17:15.091 Latency(us) 00:17:15.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.091 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:15.091 Job: Nvme1n1 ended in about 1.18 seconds with error 00:17:15.091 Verification LBA range: start 0x0 length 0x400 00:17:15.091 Nvme1n1 : 1.18 161.39 10.09 54.36 0.00 294014.30 14272.28 320009.86 00:17:15.091 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:15.091 Job: Nvme2n1 ended in about 1.19 seconds with error 00:17:15.091 Verification LBA range: start 0x0 length 0x400 00:17:15.091 Nvme2n1 : 1.19 164.11 10.26 53.59 0.00 286733.07 19126.80 257872.02 00:17:15.091 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:15.091 Job: Nvme3n1 ended in about 1.17 seconds with error 00:17:15.091 Verification LBA range: start 0x0 length 0x400 00:17:15.091 Nvme3n1 : 1.17 221.41 13.84 54.50 0.00 222436.44 10000.31 276513.37 00:17:15.091 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:15.091 Job: Nvme4n1 ended in about 1.20 seconds with error 00:17:15.091 Verification LBA range: start 0x0 length 0x400 00:17:15.091 Nvme4n1 : 1.20 160.33 10.02 53.44 0.00 282807.56 22039.51 299815.06 00:17:15.091 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:15.091 Job: Nvme5n1 ended in about 1.18 seconds with error 00:17:15.091 Verification LBA range: start 0x0 length 0x400 
00:17:15.091 Nvme5n1 : 1.18 163.34 10.21 54.45 0.00 272602.36 8252.68 309135.74
00:17:15.091 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:15.091 Job: Nvme6n1 ended in about 1.20 seconds with error
00:17:15.091 Verification LBA range: start 0x0 length 0x400
00:17:15.091 Nvme6n1 : 1.20 163.24 10.20 53.30 0.00 270075.54 20583.16 285834.05
00:17:15.091 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:15.091 Job: Nvme7n1 ended in about 1.20 seconds with error
00:17:15.091 Verification LBA range: start 0x0 length 0x400
00:17:15.091 Nvme7n1 : 1.20 159.49 9.97 53.16 0.00 270529.42 21068.61 299815.06
00:17:15.091 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:15.091 Job: Nvme8n1 ended in about 1.21 seconds with error
00:17:15.091 Verification LBA range: start 0x0 length 0x400
00:17:15.091 Nvme8n1 : 1.21 106.05 6.63 53.02 0.00 355966.42 24466.77 329330.54
00:17:15.091 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:15.091 Job: Nvme9n1 ended in about 1.21 seconds with error
00:17:15.091 Verification LBA range: start 0x0 length 0x400
00:17:15.091 Nvme9n1 : 1.21 158.66 9.92 52.89 0.00 263310.03 22330.79 279620.27
00:17:15.091 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:15.091 Job: Nvme10n1 ended in about 1.21 seconds with error
00:17:15.091 Verification LBA range: start 0x0 length 0x400
00:17:15.092 Nvme10n1 : 1.21 158.11 9.88 52.70 0.00 260041.96 16602.45 281173.71
00:17:15.092 ===================================================================================================================
00:17:15.092 Total : 1616.12 101.01 535.42 0.00 274429.34 8252.68 329330.54
00:17:15.092 [2024-04-24 16:13:16.215483] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:15.092 [2024-04-24 16:13:16.215570] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:17:15.092 [2024-04-24 16:13:16.215981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.216143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.216171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6cc0 with addr=10.0.0.2, port=4420
00:17:15.092 [2024-04-24 16:13:16.216190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6cc0 is same with the state(5) to be set
00:17:15.092 [2024-04-24 16:13:16.216323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.216462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.216486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e9560 with addr=10.0.0.2, port=4420
00:17:15.092 [2024-04-24 16:13:16.216502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9560 is same with the state(5) to be set
00:17:15.092 [2024-04-24 16:13:16.216634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.216769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.216802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a01f0 with addr=10.0.0.2, port=4420
00:17:15.092 [2024-04-24 16:13:16.216817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a01f0 is same with the state(5) to be set
00:17:15.092 [2024-04-24 16:13:16.216941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.217069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.217093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1991150 with addr=10.0.0.2, port=4420
00:17:15.092 [2024-04-24 16:13:16.217108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991150 is same with the state(5) to be set
00:17:15.092 [2024-04-24 16:13:16.219065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:17:15.092 [2024-04-24 16:13:16.219094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:15.092 [2024-04-24 16:13:16.219270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.219393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.219417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b4c00 with addr=10.0.0.2, port=4420
00:17:15.092 [2024-04-24 16:13:16.219433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4c00 is same with the state(5) to be set
00:17:15.092 [2024-04-24 16:13:16.219563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.219673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.219697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bfa20 with addr=10.0.0.2, port=4420
00:17:15.092 [2024-04-24 16:13:16.219712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa20 is same with the state(5) to be set
00:17:15.092 [2024-04-24 16:13:16.219840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.219952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.219975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b6400 with addr=10.0.0.2, port=4420
00:17:15.092 [2024-04-24 16:13:16.219991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b6400 is same with the state(5) to be set
00:17:15.092 [2024-04-24 16:13:16.220014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c6cc0 (9): Bad file descriptor
00:17:15.092 [2024-04-24 16:13:16.220036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e9560 (9): Bad file descriptor
00:17:15.092 [2024-04-24 16:13:16.220060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a01f0 (9): Bad file descriptor
00:17:15.092 [2024-04-24 16:13:16.220078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1991150 (9): Bad file descriptor
00:17:15.092 [2024-04-24 16:13:16.220131] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:15.092 [2024-04-24 16:13:16.220157] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:15.092 [2024-04-24 16:13:16.220176] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:15.092 [2024-04-24 16:13:16.220197] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:15.092 [2024-04-24 16:13:16.220225] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:15.092 [2024-04-24 16:13:16.220315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:17:15.092 [2024-04-24 16:13:16.220478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.220610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.220634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x181f8f0 with addr=10.0.0.2, port=4420
00:17:15.092 [2024-04-24 16:13:16.220650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181f8f0 is same with the state(5) to be set
00:17:15.092 [2024-04-24 16:13:16.220777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.220971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.092 [2024-04-24 16:13:16.220996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e9870 with addr=10.0.0.2, port=4420
00:17:15.092 [2024-04-24 16:13:16.221011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9870 is same with the state(5) to be set
00:17:15.092 [2024-04-24 16:13:16.221029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b4c00 (9): Bad file descriptor
00:17:15.092 [2024-04-24 16:13:16.221047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bfa20 (9): Bad file descriptor
00:17:15.092 [2024-04-24 16:13:16.221064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b6400 (9): Bad file descriptor
00:17:15.092 [2024-04-24 16:13:16.221079] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:17:15.092 [2024-04-24 16:13:16.221092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:17:15.092 [2024-04-24 16:13:16.221107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:17:15.092 [2024-04-24 16:13:16.221126] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:17:15.093 [2024-04-24 16:13:16.221139] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:17:15.093 [2024-04-24 16:13:16.221151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:17:15.093 [2024-04-24 16:13:16.221167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:17:15.093 [2024-04-24 16:13:16.221180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:17:15.093 [2024-04-24 16:13:16.221192] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:17:15.093 [2024-04-24 16:13:16.221207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:17:15.093 [2024-04-24 16:13:16.221220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:17:15.093 [2024-04-24 16:13:16.221237] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:17:15.093 [2024-04-24 16:13:16.221342] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.093 [2024-04-24 16:13:16.221363] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.093 [2024-04-24 16:13:16.221375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.093 [2024-04-24 16:13:16.221387] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.093 [2024-04-24 16:13:16.221500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.093 [2024-04-24 16:13:16.221628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:15.093 [2024-04-24 16:13:16.221650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ff380 with addr=10.0.0.2, port=4420
00:17:15.093 [2024-04-24 16:13:16.221665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff380 is same with the state(5) to be set
00:17:15.093 [2024-04-24 16:13:16.221683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181f8f0 (9): Bad file descriptor
00:17:15.093 [2024-04-24 16:13:16.221701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e9870 (9): Bad file descriptor
00:17:15.093 [2024-04-24 16:13:16.221715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:17:15.093 [2024-04-24 16:13:16.221727] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:17:15.093 [2024-04-24 16:13:16.221740] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:17:15.093 [2024-04-24 16:13:16.221765] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:17:15.093 [2024-04-24 16:13:16.221779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:17:15.093 [2024-04-24 16:13:16.221791] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:17:15.093 [2024-04-24 16:13:16.221806] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:17:15.093 [2024-04-24 16:13:16.221818] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:17:15.093 [2024-04-24 16:13:16.221830] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:17:15.093 [2024-04-24 16:13:16.221867] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.093 [2024-04-24 16:13:16.221884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.093 [2024-04-24 16:13:16.221896] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.093 [2024-04-24 16:13:16.221911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ff380 (9): Bad file descriptor
00:17:15.093 [2024-04-24 16:13:16.221927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:17:15.093 [2024-04-24 16:13:16.221939] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:17:15.093 [2024-04-24 16:13:16.221952] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:17:15.093 [2024-04-24 16:13:16.221969] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:15.093 [2024-04-24 16:13:16.221983] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:15.093 [2024-04-24 16:13:16.221995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:15.093 [2024-04-24 16:13:16.222032] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.093 [2024-04-24 16:13:16.222067] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.093 [2024-04-24 16:13:16.222080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:17:15.093 [2024-04-24 16:13:16.222092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:17:15.093 [2024-04-24 16:13:16.222105] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:17:15.093 [2024-04-24 16:13:16.222142] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:15.661 16:13:16 -- target/shutdown.sh@136 -- # nvmfpid= 00:17:15.661 16:13:16 -- target/shutdown.sh@139 -- # sleep 1 00:17:16.599 16:13:17 -- target/shutdown.sh@142 -- # kill -9 3425685 00:17:16.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3425685) - No such process 00:17:16.599 16:13:17 -- target/shutdown.sh@142 -- # true 00:17:16.599 16:13:17 -- target/shutdown.sh@144 -- # stoptarget 00:17:16.599 16:13:17 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:16.599 16:13:17 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:16.599 16:13:17 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:16.599 16:13:17 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:16.599 16:13:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:16.599 16:13:17 -- nvmf/common.sh@117 -- # sync 00:17:16.599 16:13:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.599 16:13:17 -- nvmf/common.sh@120 -- # set +e 00:17:16.599 16:13:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.599 16:13:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.599 rmmod nvme_tcp 00:17:16.599 rmmod nvme_fabrics 00:17:16.599 rmmod nvme_keyring 00:17:16.599 16:13:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.599 16:13:17 -- nvmf/common.sh@124 -- # set -e 00:17:16.599 16:13:17 -- nvmf/common.sh@125 -- # return 0 00:17:16.599 16:13:17 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:16.599 16:13:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:16.599 16:13:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:16.599 16:13:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:16.599 16:13:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.599 16:13:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.599 16:13:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.599 16:13:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.599 16:13:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.501 16:13:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.501 00:17:18.501 real 0m8.555s 00:17:18.502 user 0m22.469s 00:17:18.502 sys 0m1.733s 00:17:18.502 16:13:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:18.502 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:17:18.502 ************************************ 00:17:18.502 END TEST nvmf_shutdown_tc3 00:17:18.502 ************************************ 00:17:18.761 16:13:19 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:17:18.761 00:17:18.761 real 0m29.262s 00:17:18.761 user 1m23.588s 00:17:18.761 sys 0m6.800s 00:17:18.761 16:13:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:18.761 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:17:18.761 ************************************ 00:17:18.761 END TEST nvmf_shutdown 00:17:18.761 ************************************ 00:17:18.761 16:13:19 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:17:18.761 16:13:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:18.761 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:17:18.761 16:13:19 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:17:18.761 16:13:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:18.761 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:17:18.761 
16:13:19 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:17:18.761 16:13:19 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:18.761 16:13:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:18.761 16:13:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:18.761 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:17:18.761 ************************************ 00:17:18.762 START TEST nvmf_multicontroller 00:17:18.762 ************************************ 00:17:18.762 16:13:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:18.762 * Looking for test storage... 00:17:18.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:18.762 16:13:20 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.762 16:13:20 -- nvmf/common.sh@7 -- # uname -s 00:17:18.762 16:13:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.762 16:13:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.762 16:13:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.762 16:13:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.762 16:13:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.762 16:13:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.762 16:13:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.762 16:13:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.762 16:13:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.762 16:13:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.762 16:13:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.762 16:13:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.762 16:13:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.762 16:13:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.762 16:13:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.762 16:13:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.762 16:13:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.762 16:13:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.762 16:13:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.762 16:13:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.762 16:13:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.762 16:13:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.762 16:13:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.762 16:13:20 -- paths/export.sh@5 -- # export PATH 00:17:18.762 16:13:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.762 16:13:20 -- nvmf/common.sh@47 -- # : 0 00:17:18.762 16:13:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.762 16:13:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.762 16:13:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.762 16:13:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.762 16:13:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.762 16:13:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.762 16:13:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.762 16:13:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.762 16:13:20 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:18.762 16:13:20 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:18.762 16:13:20 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:18.762 16:13:20 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:18.762 16:13:20 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.762 16:13:20 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:18.762 16:13:20 -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:18.762 16:13:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:18.762 16:13:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.762 16:13:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:18.762 16:13:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:18.762 16:13:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:18.762 16:13:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.762 16:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.762 16:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
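[Editor's note] The constants set at the top of multicontroller.sh (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, host ports 60000/60001, bdevperf_rpc_sock) drive the provisioning that follows: each subsystem is backed by a 64 MiB malloc bdev with 512-byte blocks and listens on both 4420 and 4421. A sketch of the equivalent bring-up using scripts/rpc.py directly (the rpc_cmd seen in this log is a thin wrapper around it); the flags and NQN below are exactly the ones this test issues for cnode1, and the same sequence repeats for cnode2/Malloc1:

  rpc=scripts/rpc.py                    # talks to /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421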
00:17:18.762 16:13:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:18.762 16:13:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:18.762 16:13:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:18.762 16:13:20 -- common/autotest_common.sh@10 -- # set +x 00:17:20.666 16:13:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:20.666 16:13:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:20.666 16:13:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:20.666 16:13:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:20.666 16:13:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:20.666 16:13:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:20.666 16:13:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:20.666 16:13:21 -- nvmf/common.sh@295 -- # net_devs=() 00:17:20.666 16:13:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:20.666 16:13:21 -- nvmf/common.sh@296 -- # e810=() 00:17:20.666 16:13:21 -- nvmf/common.sh@296 -- # local -ga e810 00:17:20.666 16:13:21 -- nvmf/common.sh@297 -- # x722=() 00:17:20.666 16:13:21 -- nvmf/common.sh@297 -- # local -ga x722 00:17:20.666 16:13:21 -- nvmf/common.sh@298 -- # mlx=() 00:17:20.666 16:13:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:20.666 16:13:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.666 16:13:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:20.666 16:13:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:20.666 16:13:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:20.666 16:13:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.666 16:13:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:20.666 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:20.666 16:13:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.666 16:13:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:20.666 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:20.666 16:13:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:17:20.666 16:13:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.666 16:13:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:20.666 16:13:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:20.667 16:13:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:20.667 16:13:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.667 16:13:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.667 16:13:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:20.667 16:13:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.667 16:13:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:20.667 Found net devices under 0000:09:00.0: cvl_0_0 00:17:20.667 16:13:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.667 16:13:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.667 16:13:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.667 16:13:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:20.667 16:13:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.667 16:13:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:20.667 Found net devices under 0000:09:00.1: cvl_0_1 00:17:20.667 16:13:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.667 16:13:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:20.667 16:13:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:20.667 16:13:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:20.667 16:13:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:20.667 16:13:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:20.667 16:13:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.667 16:13:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.667 16:13:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.667 16:13:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:20.667 16:13:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.667 16:13:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.667 16:13:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:20.667 16:13:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.667 16:13:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.667 16:13:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:20.667 16:13:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:20.667 16:13:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.667 16:13:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.928 16:13:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.928 16:13:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.928 16:13:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:20.928 16:13:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.928 16:13:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.929 16:13:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:17:20.929 16:13:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:20.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:17:20.929 00:17:20.929 --- 10.0.0.2 ping statistics --- 00:17:20.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.929 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:17:20.929 16:13:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:17:20.929 00:17:20.929 --- 10.0.0.1 ping statistics --- 00:17:20.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.929 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:20.929 16:13:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.929 16:13:22 -- nvmf/common.sh@411 -- # return 0 00:17:20.929 16:13:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:20.929 16:13:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.929 16:13:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:20.929 16:13:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:20.929 16:13:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.929 16:13:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:20.929 16:13:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:20.929 16:13:22 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:20.929 16:13:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:20.929 16:13:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:20.929 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 16:13:22 -- nvmf/common.sh@470 -- # nvmfpid=3428219 00:17:20.929 16:13:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:20.929 16:13:22 -- nvmf/common.sh@471 -- # waitforlisten 3428219 00:17:20.929 16:13:22 -- common/autotest_common.sh@817 -- # '[' -z 3428219 ']' 00:17:20.929 16:13:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.929 16:13:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:20.929 16:13:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.929 16:13:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:20.929 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 [2024-04-24 16:13:22.127752] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:17:20.929 [2024-04-24 16:13:22.127849] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.929 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.929 [2024-04-24 16:13:22.191201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.188 [2024-04-24 16:13:22.295756] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
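[Editor's note] The two successful pings above validate the physical-loopback topology that nvmf_tcp_init builds on this rig: one port of the e810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target interface at 10.0.0.2, while its cabled peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that setup, assuming the same ice netdev names (substitute your own pair elsewhere):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target sanity check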
00:17:21.188 [2024-04-24 16:13:22.295826] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.188 [2024-04-24 16:13:22.295856] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.188 [2024-04-24 16:13:22.295868] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.188 [2024-04-24 16:13:22.295879] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.188 [2024-04-24 16:13:22.296254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.188 [2024-04-24 16:13:22.296313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.188 [2024-04-24 16:13:22.296317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.188 16:13:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:21.188 16:13:22 -- common/autotest_common.sh@850 -- # return 0 00:17:21.188 16:13:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:21.188 16:13:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:21.188 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.188 16:13:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.188 16:13:22 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.188 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.188 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.188 [2024-04-24 16:13:22.443457] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.188 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.188 16:13:22 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:21.188 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.188 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.447 Malloc0 00:17:21.447 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.447 16:13:22 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.447 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.447 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.447 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.448 16:13:22 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.448 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.448 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.448 16:13:22 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.448 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.448 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 [2024-04-24 16:13:22.512253] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.448 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.448 16:13:22 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:21.448 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.448 16:13:22 
-- common/autotest_common.sh@10 -- # set +x 00:17:21.448 [2024-04-24 16:13:22.520144] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:21.448 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.448 16:13:22 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:21.448 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.448 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 Malloc1 00:17:21.448 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.448 16:13:22 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:21.448 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.448 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.448 16:13:22 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:21.448 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.448 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.448 16:13:22 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:21.448 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.448 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.448 16:13:22 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:21.448 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.448 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.448 16:13:22 -- host/multicontroller.sh@44 -- # bdevperf_pid=3428249 00:17:21.448 16:13:22 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.448 16:13:22 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:21.448 16:13:22 -- host/multicontroller.sh@47 -- # waitforlisten 3428249 /var/tmp/bdevperf.sock 00:17:21.448 16:13:22 -- common/autotest_common.sh@817 -- # '[' -z 3428249 ']' 00:17:21.448 16:13:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.448 16:13:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:21.448 16:13:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
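[Editor's note] bdevperf is launched here with -z (wait for RPC) and its own RPC socket, so it starts with no bdevs and idles until the test feeds it one. The overall flow, roughly, using the paths from this run:

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  bdevperf_pid=$!
  # once the socket is listening, hand bdevperf a bdev to exercise
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # kick off the actual I/O pass
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -i/-c flags pin the host-side address and service id (hostaddr/hostsvcid in the JSON dumps that follow), which is what lets the duplicate-controller checks below compare network paths.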
00:17:21.448 16:13:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:21.448 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.706 16:13:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:21.706 16:13:22 -- common/autotest_common.sh@850 -- # return 0 00:17:21.706 16:13:22 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:21.706 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.706 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:21.964 NVMe0n1 00:17:21.964 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.964 16:13:23 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:21.964 16:13:23 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:21.964 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.964 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:21.964 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.964 1 00:17:21.964 16:13:23 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:21.964 16:13:23 -- common/autotest_common.sh@638 -- # local es=0 00:17:21.964 16:13:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:21.964 16:13:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:21.964 16:13:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.964 16:13:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:21.964 16:13:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.964 16:13:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:21.964 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.964 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:21.964 request: 00:17:21.964 { 00:17:21.964 "name": "NVMe0", 00:17:21.964 "trtype": "tcp", 00:17:21.965 "traddr": "10.0.0.2", 00:17:21.965 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:21.965 "hostaddr": "10.0.0.2", 00:17:21.965 "hostsvcid": "60000", 00:17:21.965 "adrfam": "ipv4", 00:17:21.965 "trsvcid": "4420", 00:17:21.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.965 "method": "bdev_nvme_attach_controller", 00:17:21.965 "req_id": 1 00:17:21.965 } 00:17:21.965 Got JSON-RPC error response 00:17:21.965 response: 00:17:21.965 { 00:17:21.965 "code": -114, 00:17:21.965 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:21.965 } 00:17:21.965 16:13:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:21.965 16:13:23 -- common/autotest_common.sh@641 -- # es=1 00:17:21.965 16:13:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:21.965 16:13:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:21.965 16:13:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:21.965 16:13:23 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:21.965 16:13:23 -- common/autotest_common.sh@638 -- # local es=0 00:17:21.965 16:13:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:21.965 16:13:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:21.965 16:13:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.965 16:13:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:21.965 16:13:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.965 16:13:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:21.965 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.965 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:21.965 request: 00:17:21.965 { 00:17:21.965 "name": "NVMe0", 00:17:21.965 "trtype": "tcp", 00:17:21.965 "traddr": "10.0.0.2", 00:17:21.965 "hostaddr": "10.0.0.2", 00:17:21.965 "hostsvcid": "60000", 00:17:21.965 "adrfam": "ipv4", 00:17:21.965 "trsvcid": "4420", 00:17:21.965 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:21.965 "method": "bdev_nvme_attach_controller", 00:17:21.965 "req_id": 1 00:17:21.965 } 00:17:21.965 Got JSON-RPC error response 00:17:21.965 response: 00:17:21.965 { 00:17:21.965 "code": -114, 00:17:21.965 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:21.965 } 00:17:21.965 16:13:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:21.965 16:13:23 -- common/autotest_common.sh@641 -- # es=1 00:17:21.965 16:13:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:21.965 16:13:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:21.965 16:13:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:21.965 16:13:23 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:21.965 16:13:23 -- common/autotest_common.sh@638 -- # local es=0 00:17:21.965 16:13:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:21.965 16:13:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:21.965 16:13:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.965 16:13:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:21.965 16:13:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.965 16:13:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:21.965 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.965 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:21.965 request: 00:17:21.965 { 00:17:21.965 "name": "NVMe0", 00:17:21.965 "trtype": "tcp", 00:17:21.965 "traddr": "10.0.0.2", 00:17:21.965 "hostaddr": 
"10.0.0.2", 00:17:21.965 "hostsvcid": "60000", 00:17:21.965 "adrfam": "ipv4", 00:17:21.965 "trsvcid": "4420", 00:17:21.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.965 "multipath": "disable", 00:17:21.965 "method": "bdev_nvme_attach_controller", 00:17:21.965 "req_id": 1 00:17:21.965 } 00:17:21.965 Got JSON-RPC error response 00:17:21.965 response: 00:17:21.965 { 00:17:21.965 "code": -114, 00:17:21.965 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:17:21.965 } 00:17:21.965 16:13:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:21.965 16:13:23 -- common/autotest_common.sh@641 -- # es=1 00:17:21.965 16:13:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:21.965 16:13:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:21.965 16:13:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:21.965 16:13:23 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:21.965 16:13:23 -- common/autotest_common.sh@638 -- # local es=0 00:17:21.965 16:13:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:21.965 16:13:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:21.965 16:13:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.965 16:13:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:21.965 16:13:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.965 16:13:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:21.965 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.965 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:21.965 request: 00:17:21.965 { 00:17:21.965 "name": "NVMe0", 00:17:21.965 "trtype": "tcp", 00:17:21.965 "traddr": "10.0.0.2", 00:17:21.965 "hostaddr": "10.0.0.2", 00:17:21.965 "hostsvcid": "60000", 00:17:21.965 "adrfam": "ipv4", 00:17:21.965 "trsvcid": "4420", 00:17:21.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.965 "multipath": "failover", 00:17:21.965 "method": "bdev_nvme_attach_controller", 00:17:21.965 "req_id": 1 00:17:21.965 } 00:17:21.965 Got JSON-RPC error response 00:17:21.965 response: 00:17:21.965 { 00:17:21.965 "code": -114, 00:17:21.965 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:21.965 } 00:17:21.965 16:13:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:21.965 16:13:23 -- common/autotest_common.sh@641 -- # es=1 00:17:21.965 16:13:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:21.965 16:13:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:21.965 16:13:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:21.965 16:13:23 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:21.965 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.965 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:22.224 00:17:22.224 16:13:23 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:17:22.224 16:13:23 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:22.224 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.224 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:22.224 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.224 16:13:23 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:22.224 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.224 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:22.224 00:17:22.224 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.224 16:13:23 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:22.224 16:13:23 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:22.224 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.224 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:22.224 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.484 16:13:23 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:22.484 16:13:23 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:23.421 0 00:17:23.421 16:13:24 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:23.421 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.421 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.421 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.421 16:13:24 -- host/multicontroller.sh@100 -- # killprocess 3428249 00:17:23.421 16:13:24 -- common/autotest_common.sh@936 -- # '[' -z 3428249 ']' 00:17:23.421 16:13:24 -- common/autotest_common.sh@940 -- # kill -0 3428249 00:17:23.421 16:13:24 -- common/autotest_common.sh@941 -- # uname 00:17:23.421 16:13:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.421 16:13:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3428249 00:17:23.421 16:13:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:23.421 16:13:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:23.421 16:13:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3428249' 00:17:23.421 killing process with pid 3428249 00:17:23.421 16:13:24 -- common/autotest_common.sh@955 -- # kill 3428249 00:17:23.421 16:13:24 -- common/autotest_common.sh@960 -- # wait 3428249 00:17:23.679 16:13:24 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.679 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.679 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.679 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.679 16:13:24 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:23.679 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.679 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.679 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.679 16:13:24 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
00:17:23.679 16:13:24 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:23.679 16:13:24 -- common/autotest_common.sh@1598 -- # read -r file 00:17:23.679 16:13:24 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:17:23.679 16:13:24 -- common/autotest_common.sh@1597 -- # sort -u 00:17:23.679 16:13:24 -- common/autotest_common.sh@1599 -- # cat 00:17:23.679 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:23.679 [2024-04-24 16:13:22.622814] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:17:23.679 [2024-04-24 16:13:22.622909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428249 ] 00:17:23.679 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.680 [2024-04-24 16:13:22.682552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.680 [2024-04-24 16:13:22.788491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.680 [2024-04-24 16:13:23.492812] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 06f04fbf-bc6b-4df0-b962-bc508366b1bf already exists 00:17:23.680 [2024-04-24 16:13:23.492852] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:06f04fbf-bc6b-4df0-b962-bc508366b1bf alias for bdev NVMe1n1 00:17:23.680 [2024-04-24 16:13:23.492886] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:23.680 Running I/O for 1 seconds... 00:17:23.680 00:17:23.680 Latency(us) 00:17:23.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.680 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:23.680 NVMe0n1 : 1.01 16814.97 65.68 0.00 0.00 7578.83 7233.23 16408.27 00:17:23.680 =================================================================================================================== 00:17:23.680 Total : 16814.97 65.68 0.00 0.00 7578.83 7233.23 16408.27 00:17:23.680 Received shutdown signal, test time was about 1.000000 seconds 00:17:23.680 00:17:23.680 Latency(us) 00:17:23.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.680 =================================================================================================================== 00:17:23.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:23.680 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:23.680 16:13:24 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:23.680 16:13:24 -- common/autotest_common.sh@1598 -- # read -r file 00:17:23.680 16:13:24 -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:23.680 16:13:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:23.680 16:13:24 -- nvmf/common.sh@117 -- # sync 00:17:23.940 16:13:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.940 16:13:24 -- nvmf/common.sh@120 -- # set +e 00:17:23.940 16:13:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.940 16:13:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.940 rmmod nvme_tcp 00:17:23.940 rmmod nvme_fabrics 00:17:23.940 rmmod nvme_keyring 00:17:23.940 16:13:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.940 16:13:25 -- nvmf/common.sh@124 -- # set -e 
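[Editor's note] The four request/response dumps earlier in this test are deliberate failures: the NOT helper inverts rpc_cmd's exit status, so the test passes only when re-attaching a controller under the existing NVMe0 name (with a different host NQN, a different subsystem, multipath=disable, or multipath=failover) is rejected with JSON-RPC code -114. A sketch of the same expect-failure pattern in plain shell; expect_fail here paraphrases the idea rather than copying NOT from autotest_common.sh:

  expect_fail() {                 # succeed only when the wrapped command fails
      if "$@"; then
          echo "unexpectedly succeeded: $*" >&2
          return 1
      fi
      return 0
  }
  expect_fail scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000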
00:17:23.940 16:13:25 -- nvmf/common.sh@125 -- # return 0 00:17:23.940 16:13:25 -- nvmf/common.sh@478 -- # '[' -n 3428219 ']' 00:17:23.940 16:13:25 -- nvmf/common.sh@479 -- # killprocess 3428219 00:17:23.940 16:13:25 -- common/autotest_common.sh@936 -- # '[' -z 3428219 ']' 00:17:23.940 16:13:25 -- common/autotest_common.sh@940 -- # kill -0 3428219 00:17:23.940 16:13:25 -- common/autotest_common.sh@941 -- # uname 00:17:23.940 16:13:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.940 16:13:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3428219 00:17:23.940 16:13:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:23.940 16:13:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:23.940 16:13:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3428219' 00:17:23.940 killing process with pid 3428219 00:17:23.940 16:13:25 -- common/autotest_common.sh@955 -- # kill 3428219 00:17:23.940 16:13:25 -- common/autotest_common.sh@960 -- # wait 3428219 00:17:24.230 16:13:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:24.230 16:13:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:24.230 16:13:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:24.230 16:13:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.230 16:13:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.230 16:13:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.230 16:13:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.230 16:13:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.187 16:13:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:26.187 00:17:26.187 real 0m7.457s 00:17:26.187 user 0m12.056s 00:17:26.187 sys 0m2.237s 00:17:26.187 16:13:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:26.187 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:17:26.187 ************************************ 00:17:26.187 END TEST nvmf_multicontroller 00:17:26.187 ************************************ 00:17:26.187 16:13:27 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:26.187 16:13:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:26.187 16:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.187 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:17:26.445 ************************************ 00:17:26.445 START TEST nvmf_aer 00:17:26.445 ************************************ 00:17:26.445 16:13:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:26.445 * Looking for test storage... 
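[Editor's note] run_test is the banner-and-timing wrapper visible throughout this log: it prints the START TEST / END TEST markers, times the suite, and propagates the exit code into the surrounding xtrace. A rough sketch of the pattern, as an illustration only and not the actual run_test from autotest_common.sh:

  run_test_sketch() {
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      local start=$SECONDS rc=0
      "$@" || rc=$?
      printf '************************************\nEND TEST %s (%ss, rc=%s)\n************************************\n' \
          "$name" "$((SECONDS - start))" "$rc"
      return $rc
  }
  run_test_sketch nvmf_aer test/nvmf/host/aer.sh --transport=tcp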
00:17:26.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:26.446 16:13:27 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.446 16:13:27 -- nvmf/common.sh@7 -- # uname -s 00:17:26.446 16:13:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.446 16:13:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.446 16:13:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.446 16:13:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.446 16:13:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.446 16:13:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.446 16:13:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.446 16:13:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.446 16:13:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.446 16:13:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.446 16:13:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:26.446 16:13:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:26.446 16:13:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.446 16:13:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.446 16:13:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.446 16:13:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.446 16:13:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.446 16:13:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.446 16:13:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.446 16:13:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.446 16:13:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.446 16:13:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.446 16:13:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.446 16:13:27 -- paths/export.sh@5 -- # export PATH 00:17:26.446 16:13:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.446 16:13:27 -- nvmf/common.sh@47 -- # : 0 00:17:26.446 16:13:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.446 16:13:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.446 16:13:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.446 16:13:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.446 16:13:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.446 16:13:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.446 16:13:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.446 16:13:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.446 16:13:27 -- host/aer.sh@11 -- # nvmftestinit 00:17:26.446 16:13:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:26.446 16:13:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.446 16:13:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:26.446 16:13:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:26.446 16:13:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:26.446 16:13:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.446 16:13:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.446 16:13:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.446 16:13:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:26.446 16:13:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:26.446 16:13:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:26.446 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.346 16:13:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:28.346 16:13:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:28.346 16:13:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:28.346 16:13:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:28.346 16:13:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:28.346 16:13:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:28.346 16:13:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:28.346 16:13:29 -- nvmf/common.sh@295 -- # net_devs=() 00:17:28.346 16:13:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:28.346 16:13:29 -- nvmf/common.sh@296 -- # e810=() 00:17:28.346 16:13:29 -- nvmf/common.sh@296 -- # local -ga e810 00:17:28.346 16:13:29 -- nvmf/common.sh@297 -- # x722=() 00:17:28.346 
16:13:29 -- nvmf/common.sh@297 -- # local -ga x722 00:17:28.346 16:13:29 -- nvmf/common.sh@298 -- # mlx=() 00:17:28.346 16:13:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:28.346 16:13:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.346 16:13:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:28.346 16:13:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:28.346 16:13:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:28.346 16:13:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.346 16:13:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:28.346 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:28.346 16:13:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.346 16:13:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:28.346 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:28.346 16:13:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:28.346 16:13:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.346 16:13:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.346 16:13:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:28.346 16:13:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.346 16:13:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:28.346 Found net devices under 0000:09:00.0: cvl_0_0 00:17:28.346 16:13:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.346 16:13:29 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.346 16:13:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.346 16:13:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:28.346 16:13:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.346 16:13:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:28.346 Found net devices under 0000:09:00.1: cvl_0_1 00:17:28.346 16:13:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.346 16:13:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:28.346 16:13:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:28.346 16:13:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:28.346 16:13:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.346 16:13:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.346 16:13:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.346 16:13:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:28.346 16:13:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.346 16:13:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.346 16:13:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:28.346 16:13:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.346 16:13:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.346 16:13:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:28.346 16:13:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:28.346 16:13:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.346 16:13:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.346 16:13:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.346 16:13:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.346 16:13:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:28.346 16:13:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.346 16:13:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.346 16:13:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.346 16:13:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:28.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:17:28.346 00:17:28.346 --- 10.0.0.2 ping statistics --- 00:17:28.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.346 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:28.346 16:13:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:28.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:17:28.346 00:17:28.346 --- 10.0.0.1 ping statistics --- 00:17:28.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.346 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:17:28.346 16:13:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.346 16:13:29 -- nvmf/common.sh@411 -- # return 0 00:17:28.346 16:13:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:28.346 16:13:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.346 16:13:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:28.346 16:13:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.346 16:13:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:28.346 16:13:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:28.604 16:13:29 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:28.604 16:13:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:28.604 16:13:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:28.604 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:17:28.604 16:13:29 -- nvmf/common.sh@470 -- # nvmfpid=3430471 00:17:28.604 16:13:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:28.604 16:13:29 -- nvmf/common.sh@471 -- # waitforlisten 3430471 00:17:28.604 16:13:29 -- common/autotest_common.sh@817 -- # '[' -z 3430471 ']' 00:17:28.604 16:13:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.604 16:13:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:28.604 16:13:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.604 16:13:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:28.604 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:17:28.604 [2024-04-24 16:13:29.683079] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:17:28.604 [2024-04-24 16:13:29.683159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.604 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.604 [2024-04-24 16:13:29.752734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.604 [2024-04-24 16:13:29.869864] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.604 [2024-04-24 16:13:29.869914] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.604 [2024-04-24 16:13:29.869929] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.604 [2024-04-24 16:13:29.869942] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.604 [2024-04-24 16:13:29.869953] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
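[Editor's note] The nvmf_tcp_init sequence above builds a point-to-point NVMe/TCP topology out of the two physical E810 ports by moving one of them into a private network namespace, so the initiator (10.0.0.1 on cvl_0_1, root namespace) and the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) talk over a real link on the same host. A minimal sketch of that wiring, using only the interface names, addresses, and port from the commands visible in the log:

    ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one E810 port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator keeps the other port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP in on the initiator side
    ping -c 1 10.0.0.2                                            # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1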
00:17:28.604 [2024-04-24 16:13:29.870038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.604 [2024-04-24 16:13:29.870111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.604 [2024-04-24 16:13:29.870162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.604 [2024-04-24 16:13:29.870165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.537 16:13:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:29.537 16:13:30 -- common/autotest_common.sh@850 -- # return 0 00:17:29.537 16:13:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:29.537 16:13:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:29.537 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 16:13:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.537 16:13:30 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:29.537 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.537 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 [2024-04-24 16:13:30.695828] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.537 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.537 16:13:30 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:29.537 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.537 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 Malloc0 00:17:29.537 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.537 16:13:30 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:29.537 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.537 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.537 16:13:30 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:29.537 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.537 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.537 16:13:30 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.537 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.537 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 [2024-04-24 16:13:30.746537] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.537 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.537 16:13:30 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:29.537 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.537 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 [2024-04-24 16:13:30.754301] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:29.537 [ 00:17:29.537 { 00:17:29.537 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:29.537 "subtype": "Discovery", 00:17:29.537 "listen_addresses": [], 00:17:29.537 "allow_any_host": true, 00:17:29.537 "hosts": [] 00:17:29.537 }, 00:17:29.537 { 00:17:29.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:17:29.537 "subtype": "NVMe", 00:17:29.537 "listen_addresses": [ 00:17:29.537 { 00:17:29.537 "transport": "TCP", 00:17:29.537 "trtype": "TCP", 00:17:29.537 "adrfam": "IPv4", 00:17:29.537 "traddr": "10.0.0.2", 00:17:29.537 "trsvcid": "4420" 00:17:29.537 } 00:17:29.537 ], 00:17:29.537 "allow_any_host": true, 00:17:29.537 "hosts": [], 00:17:29.537 "serial_number": "SPDK00000000000001", 00:17:29.537 "model_number": "SPDK bdev Controller", 00:17:29.537 "max_namespaces": 2, 00:17:29.537 "min_cntlid": 1, 00:17:29.537 "max_cntlid": 65519, 00:17:29.537 "namespaces": [ 00:17:29.537 { 00:17:29.537 "nsid": 1, 00:17:29.537 "bdev_name": "Malloc0", 00:17:29.537 "name": "Malloc0", 00:17:29.537 "nguid": "CB4798F7B24B49849315B4E29F913493", 00:17:29.537 "uuid": "cb4798f7-b24b-4984-9315-b4e29f913493" 00:17:29.537 } 00:17:29.537 ] 00:17:29.537 } 00:17:29.537 ] 00:17:29.537 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.537 16:13:30 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:29.537 16:13:30 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:29.537 16:13:30 -- host/aer.sh@33 -- # aerpid=3430627 00:17:29.537 16:13:30 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:29.537 16:13:30 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:29.537 16:13:30 -- common/autotest_common.sh@1251 -- # local i=0 00:17:29.537 16:13:30 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:29.537 16:13:30 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:17:29.537 16:13:30 -- common/autotest_common.sh@1254 -- # i=1 00:17:29.537 16:13:30 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:29.537 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.795 16:13:30 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:29.795 16:13:30 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:17:29.795 16:13:30 -- common/autotest_common.sh@1254 -- # i=2 00:17:29.795 16:13:30 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:29.795 16:13:30 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:29.795 16:13:30 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:29.795 16:13:30 -- common/autotest_common.sh@1262 -- # return 0 00:17:29.795 16:13:30 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:29.795 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.795 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:29.795 Malloc1 00:17:29.795 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.795 16:13:31 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:29.795 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.795 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:29.795 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.795 16:13:31 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:29.795 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.795 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:29.795 Asynchronous Event Request test 00:17:29.795 Attaching to 10.0.0.2 00:17:29.795 Attached to 10.0.0.2 00:17:29.795 Registering asynchronous event callbacks... 
00:17:29.795 Starting namespace attribute notice tests for all controllers... 00:17:29.795 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:29.795 aer_cb - Changed Namespace 00:17:29.795 Cleaning up... 00:17:29.795 [ 00:17:29.795 { 00:17:29.795 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:29.795 "subtype": "Discovery", 00:17:29.795 "listen_addresses": [], 00:17:29.795 "allow_any_host": true, 00:17:29.795 "hosts": [] 00:17:29.795 }, 00:17:29.795 { 00:17:29.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.795 "subtype": "NVMe", 00:17:29.795 "listen_addresses": [ 00:17:29.795 { 00:17:29.795 "transport": "TCP", 00:17:29.795 "trtype": "TCP", 00:17:29.795 "adrfam": "IPv4", 00:17:29.795 "traddr": "10.0.0.2", 00:17:29.795 "trsvcid": "4420" 00:17:29.795 } 00:17:29.795 ], 00:17:29.795 "allow_any_host": true, 00:17:29.795 "hosts": [], 00:17:29.795 "serial_number": "SPDK00000000000001", 00:17:29.795 "model_number": "SPDK bdev Controller", 00:17:29.795 "max_namespaces": 2, 00:17:29.795 "min_cntlid": 1, 00:17:29.795 "max_cntlid": 65519, 00:17:29.795 "namespaces": [ 00:17:29.795 { 00:17:29.795 "nsid": 1, 00:17:29.795 "bdev_name": "Malloc0", 00:17:29.795 "name": "Malloc0", 00:17:29.795 "nguid": "CB4798F7B24B49849315B4E29F913493", 00:17:29.795 "uuid": "cb4798f7-b24b-4984-9315-b4e29f913493" 00:17:29.795 }, 00:17:29.795 { 00:17:29.795 "nsid": 2, 00:17:29.795 "bdev_name": "Malloc1", 00:17:29.795 "name": "Malloc1", 00:17:29.795 "nguid": "19B4F03B65E749B594BD7458C191946D", 00:17:29.795 "uuid": "19b4f03b-65e7-49b5-94bd-7458c191946d" 00:17:29.795 } 00:17:29.795 ] 00:17:29.795 } 00:17:29.795 ] 00:17:29.795 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.795 16:13:31 -- host/aer.sh@43 -- # wait 3430627 00:17:29.795 16:13:31 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:29.795 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.795 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:29.795 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.795 16:13:31 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:29.795 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.795 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:30.053 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.053 16:13:31 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.053 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.053 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:30.053 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.053 16:13:31 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:30.053 16:13:31 -- host/aer.sh@51 -- # nvmftestfini 00:17:30.053 16:13:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:30.053 16:13:31 -- nvmf/common.sh@117 -- # sync 00:17:30.053 16:13:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.053 16:13:31 -- nvmf/common.sh@120 -- # set +e 00:17:30.053 16:13:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.053 16:13:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.053 rmmod nvme_tcp 00:17:30.053 rmmod nvme_fabrics 00:17:30.053 rmmod nvme_keyring 00:17:30.053 16:13:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.053 16:13:31 -- nvmf/common.sh@124 -- # set -e 00:17:30.053 16:13:31 -- nvmf/common.sh@125 -- # return 0 00:17:30.053 16:13:31 -- nvmf/common.sh@478 -- # '[' -n 3430471 ']' 00:17:30.053 16:13:31 
-- nvmf/common.sh@479 -- # killprocess 3430471 00:17:30.053 16:13:31 -- common/autotest_common.sh@936 -- # '[' -z 3430471 ']' 00:17:30.053 16:13:31 -- common/autotest_common.sh@940 -- # kill -0 3430471 00:17:30.053 16:13:31 -- common/autotest_common.sh@941 -- # uname 00:17:30.053 16:13:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.053 16:13:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3430471 00:17:30.053 16:13:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:30.053 16:13:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:30.053 16:13:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3430471' 00:17:30.053 killing process with pid 3430471 00:17:30.053 16:13:31 -- common/autotest_common.sh@955 -- # kill 3430471 00:17:30.053 [2024-04-24 16:13:31.172464] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:30.053 16:13:31 -- common/autotest_common.sh@960 -- # wait 3430471 00:17:30.311 16:13:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:30.311 16:13:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:30.311 16:13:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:30.311 16:13:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.311 16:13:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.311 16:13:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.311 16:13:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.311 16:13:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.214 16:13:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.214 00:17:32.214 real 0m5.947s 00:17:32.214 user 0m6.955s 00:17:32.214 sys 0m1.866s 00:17:32.214 16:13:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:32.214 16:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:32.214 ************************************ 00:17:32.214 END TEST nvmf_aer 00:17:32.214 ************************************ 00:17:32.473 16:13:33 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:32.473 16:13:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:32.473 16:13:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.473 16:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:32.473 ************************************ 00:17:32.473 START TEST nvmf_async_init 00:17:32.473 ************************************ 00:17:32.473 16:13:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:32.473 * Looking for test storage... 
00:17:32.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:32.473 16:13:33 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.473 16:13:33 -- nvmf/common.sh@7 -- # uname -s 00:17:32.473 16:13:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.473 16:13:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.473 16:13:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.473 16:13:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.473 16:13:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.473 16:13:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.473 16:13:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.473 16:13:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.473 16:13:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.473 16:13:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.473 16:13:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.473 16:13:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.473 16:13:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.473 16:13:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.473 16:13:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.473 16:13:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.473 16:13:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.473 16:13:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.473 16:13:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.473 16:13:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.473 16:13:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.473 16:13:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.473 16:13:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.473 16:13:33 -- paths/export.sh@5 -- # export PATH 00:17:32.473 16:13:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.473 16:13:33 -- nvmf/common.sh@47 -- # : 0 00:17:32.473 16:13:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.473 16:13:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.473 16:13:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.473 16:13:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.473 16:13:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.473 16:13:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.473 16:13:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.473 16:13:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.473 16:13:33 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:32.473 16:13:33 -- host/async_init.sh@14 -- # null_block_size=512 00:17:32.473 16:13:33 -- host/async_init.sh@15 -- # null_bdev=null0 00:17:32.473 16:13:33 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:32.473 16:13:33 -- host/async_init.sh@20 -- # uuidgen 00:17:32.473 16:13:33 -- host/async_init.sh@20 -- # tr -d - 00:17:32.473 16:13:33 -- host/async_init.sh@20 -- # nguid=a4075839285946b296e28c6c62c14120 00:17:32.473 16:13:33 -- host/async_init.sh@22 -- # nvmftestinit 00:17:32.473 16:13:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:32.473 16:13:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.473 16:13:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:32.473 16:13:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:32.473 16:13:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:32.473 16:13:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.473 16:13:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.473 16:13:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.473 16:13:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:32.473 16:13:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:32.473 16:13:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.473 16:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:34.373 16:13:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:34.373 16:13:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:34.373 16:13:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:34.373 16:13:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:34.373 16:13:35 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:34.373 16:13:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:34.373 16:13:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:34.373 16:13:35 -- nvmf/common.sh@295 -- # net_devs=() 00:17:34.373 16:13:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:34.373 16:13:35 -- nvmf/common.sh@296 -- # e810=() 00:17:34.373 16:13:35 -- nvmf/common.sh@296 -- # local -ga e810 00:17:34.373 16:13:35 -- nvmf/common.sh@297 -- # x722=() 00:17:34.373 16:13:35 -- nvmf/common.sh@297 -- # local -ga x722 00:17:34.373 16:13:35 -- nvmf/common.sh@298 -- # mlx=() 00:17:34.373 16:13:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:34.373 16:13:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.373 16:13:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:34.373 16:13:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:34.373 16:13:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:34.373 16:13:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:34.373 16:13:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:34.373 16:13:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:34.373 16:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.373 16:13:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:34.373 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:34.373 16:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.373 16:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.373 16:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.373 16:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.373 16:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.373 16:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.373 16:13:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:34.373 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:34.374 16:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.374 16:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.374 16:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.374 16:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.374 16:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.374 16:13:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:34.374 16:13:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:34.374 16:13:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:34.374 16:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.374 
16:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.374 16:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:34.374 16:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.374 16:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:34.374 Found net devices under 0000:09:00.0: cvl_0_0 00:17:34.374 16:13:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.374 16:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.374 16:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.374 16:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:34.374 16:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.374 16:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:34.374 Found net devices under 0000:09:00.1: cvl_0_1 00:17:34.374 16:13:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.374 16:13:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:34.374 16:13:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:34.374 16:13:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:34.374 16:13:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:34.374 16:13:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:34.374 16:13:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.374 16:13:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.374 16:13:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.374 16:13:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:34.374 16:13:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.374 16:13:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.374 16:13:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:34.374 16:13:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.374 16:13:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.374 16:13:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:34.374 16:13:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:34.374 16:13:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.374 16:13:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.632 16:13:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.632 16:13:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.632 16:13:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:34.632 16:13:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.632 16:13:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.632 16:13:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.632 16:13:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:34.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:17:34.632 00:17:34.632 --- 10.0.0.2 ping statistics --- 00:17:34.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.632 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:17:34.632 16:13:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:34.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:17:34.632 00:17:34.632 --- 10.0.0.1 ping statistics --- 00:17:34.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.632 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:34.632 16:13:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.632 16:13:35 -- nvmf/common.sh@411 -- # return 0 00:17:34.632 16:13:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:34.632 16:13:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.632 16:13:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:34.632 16:13:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:34.632 16:13:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.632 16:13:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:34.632 16:13:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:34.632 16:13:35 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:34.632 16:13:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:34.632 16:13:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:34.632 16:13:35 -- common/autotest_common.sh@10 -- # set +x 00:17:34.632 16:13:35 -- nvmf/common.sh@470 -- # nvmfpid=3432685 00:17:34.632 16:13:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:34.632 16:13:35 -- nvmf/common.sh@471 -- # waitforlisten 3432685 00:17:34.632 16:13:35 -- common/autotest_common.sh@817 -- # '[' -z 3432685 ']' 00:17:34.632 16:13:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.632 16:13:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:34.632 16:13:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.632 16:13:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:34.632 16:13:35 -- common/autotest_common.sh@10 -- # set +x 00:17:34.632 [2024-04-24 16:13:35.798587] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:17:34.632 [2024-04-24 16:13:35.798673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.632 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.632 [2024-04-24 16:13:35.861215] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.891 [2024-04-24 16:13:35.963283] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.891 [2024-04-24 16:13:35.963335] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.891 [2024-04-24 16:13:35.963365] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.891 [2024-04-24 16:13:35.963377] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.891 [2024-04-24 16:13:35.963388] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
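[Editor's note] As in the nvmf_aer run, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten polls the RPC socket until the app answers; the only difference here is the single-core mask (-m 0x1). A minimal sketch of that start-and-wait pattern (the socket path and rpc.py invocation are the SPDK defaults; the retry count is illustrative):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for i in $(seq 1 100); do                                     # poll the default RPC socket
            if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
                    break
            fi
            sleep 0.1
    done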
00:17:34.891 [2024-04-24 16:13:35.963430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.891 16:13:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:34.891 16:13:36 -- common/autotest_common.sh@850 -- # return 0 00:17:34.891 16:13:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:34.891 16:13:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:34.891 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:34.891 16:13:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.891 16:13:36 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:34.891 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.891 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:34.891 [2024-04-24 16:13:36.106208] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.891 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.891 16:13:36 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:34.891 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.891 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:34.891 null0 00:17:34.891 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.891 16:13:36 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:34.891 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.891 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:34.891 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.891 16:13:36 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:34.891 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.891 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:34.891 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.891 16:13:36 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a4075839285946b296e28c6c62c14120 00:17:34.891 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.891 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:34.891 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.891 16:13:36 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:34.891 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.891 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:34.891 [2024-04-24 16:13:36.146497] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.891 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.891 16:13:36 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:34.891 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.891 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.152 nvme0n1 00:17:35.152 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.152 16:13:36 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:35.152 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.152 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.152 [ 00:17:35.152 { 00:17:35.152 "name": "nvme0n1", 00:17:35.152 "aliases": [ 00:17:35.152 
"a4075839-2859-46b2-96e2-8c6c62c14120" 00:17:35.152 ], 00:17:35.152 "product_name": "NVMe disk", 00:17:35.152 "block_size": 512, 00:17:35.152 "num_blocks": 2097152, 00:17:35.152 "uuid": "a4075839-2859-46b2-96e2-8c6c62c14120", 00:17:35.152 "assigned_rate_limits": { 00:17:35.152 "rw_ios_per_sec": 0, 00:17:35.152 "rw_mbytes_per_sec": 0, 00:17:35.152 "r_mbytes_per_sec": 0, 00:17:35.152 "w_mbytes_per_sec": 0 00:17:35.152 }, 00:17:35.152 "claimed": false, 00:17:35.152 "zoned": false, 00:17:35.152 "supported_io_types": { 00:17:35.152 "read": true, 00:17:35.152 "write": true, 00:17:35.152 "unmap": false, 00:17:35.152 "write_zeroes": true, 00:17:35.152 "flush": true, 00:17:35.152 "reset": true, 00:17:35.152 "compare": true, 00:17:35.152 "compare_and_write": true, 00:17:35.152 "abort": true, 00:17:35.152 "nvme_admin": true, 00:17:35.152 "nvme_io": true 00:17:35.152 }, 00:17:35.152 "memory_domains": [ 00:17:35.152 { 00:17:35.152 "dma_device_id": "system", 00:17:35.152 "dma_device_type": 1 00:17:35.152 } 00:17:35.152 ], 00:17:35.152 "driver_specific": { 00:17:35.152 "nvme": [ 00:17:35.152 { 00:17:35.152 "trid": { 00:17:35.152 "trtype": "TCP", 00:17:35.152 "adrfam": "IPv4", 00:17:35.152 "traddr": "10.0.0.2", 00:17:35.152 "trsvcid": "4420", 00:17:35.152 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:35.152 }, 00:17:35.152 "ctrlr_data": { 00:17:35.152 "cntlid": 1, 00:17:35.152 "vendor_id": "0x8086", 00:17:35.152 "model_number": "SPDK bdev Controller", 00:17:35.152 "serial_number": "00000000000000000000", 00:17:35.152 "firmware_revision": "24.05", 00:17:35.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:35.152 "oacs": { 00:17:35.152 "security": 0, 00:17:35.152 "format": 0, 00:17:35.152 "firmware": 0, 00:17:35.152 "ns_manage": 0 00:17:35.152 }, 00:17:35.152 "multi_ctrlr": true, 00:17:35.152 "ana_reporting": false 00:17:35.152 }, 00:17:35.152 "vs": { 00:17:35.152 "nvme_version": "1.3" 00:17:35.152 }, 00:17:35.152 "ns_data": { 00:17:35.152 "id": 1, 00:17:35.152 "can_share": true 00:17:35.152 } 00:17:35.152 } 00:17:35.152 ], 00:17:35.152 "mp_policy": "active_passive" 00:17:35.152 } 00:17:35.152 } 00:17:35.152 ] 00:17:35.152 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.152 16:13:36 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:35.152 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.152 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.152 [2024-04-24 16:13:36.399118] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:35.152 [2024-04-24 16:13:36.399206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb9f0 (9): Bad file descriptor 00:17:35.412 [2024-04-24 16:13:36.541894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:35.412 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.412 16:13:36 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:35.412 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.412 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.412 [ 00:17:35.412 { 00:17:35.412 "name": "nvme0n1", 00:17:35.412 "aliases": [ 00:17:35.412 "a4075839-2859-46b2-96e2-8c6c62c14120" 00:17:35.412 ], 00:17:35.412 "product_name": "NVMe disk", 00:17:35.412 "block_size": 512, 00:17:35.412 "num_blocks": 2097152, 00:17:35.412 "uuid": "a4075839-2859-46b2-96e2-8c6c62c14120", 00:17:35.412 "assigned_rate_limits": { 00:17:35.412 "rw_ios_per_sec": 0, 00:17:35.412 "rw_mbytes_per_sec": 0, 00:17:35.412 "r_mbytes_per_sec": 0, 00:17:35.412 "w_mbytes_per_sec": 0 00:17:35.412 }, 00:17:35.412 "claimed": false, 00:17:35.412 "zoned": false, 00:17:35.412 "supported_io_types": { 00:17:35.412 "read": true, 00:17:35.412 "write": true, 00:17:35.412 "unmap": false, 00:17:35.412 "write_zeroes": true, 00:17:35.412 "flush": true, 00:17:35.412 "reset": true, 00:17:35.412 "compare": true, 00:17:35.412 "compare_and_write": true, 00:17:35.412 "abort": true, 00:17:35.412 "nvme_admin": true, 00:17:35.412 "nvme_io": true 00:17:35.412 }, 00:17:35.412 "memory_domains": [ 00:17:35.412 { 00:17:35.412 "dma_device_id": "system", 00:17:35.412 "dma_device_type": 1 00:17:35.412 } 00:17:35.412 ], 00:17:35.412 "driver_specific": { 00:17:35.412 "nvme": [ 00:17:35.412 { 00:17:35.412 "trid": { 00:17:35.412 "trtype": "TCP", 00:17:35.412 "adrfam": "IPv4", 00:17:35.412 "traddr": "10.0.0.2", 00:17:35.412 "trsvcid": "4420", 00:17:35.412 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:35.412 }, 00:17:35.412 "ctrlr_data": { 00:17:35.412 "cntlid": 2, 00:17:35.412 "vendor_id": "0x8086", 00:17:35.412 "model_number": "SPDK bdev Controller", 00:17:35.412 "serial_number": "00000000000000000000", 00:17:35.412 "firmware_revision": "24.05", 00:17:35.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:35.412 "oacs": { 00:17:35.412 "security": 0, 00:17:35.412 "format": 0, 00:17:35.412 "firmware": 0, 00:17:35.412 "ns_manage": 0 00:17:35.412 }, 00:17:35.412 "multi_ctrlr": true, 00:17:35.412 "ana_reporting": false 00:17:35.412 }, 00:17:35.412 "vs": { 00:17:35.412 "nvme_version": "1.3" 00:17:35.412 }, 00:17:35.412 "ns_data": { 00:17:35.412 "id": 1, 00:17:35.412 "can_share": true 00:17:35.412 } 00:17:35.412 } 00:17:35.412 ], 00:17:35.412 "mp_policy": "active_passive" 00:17:35.412 } 00:17:35.412 } 00:17:35.412 ] 00:17:35.412 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.412 16:13:36 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.412 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.412 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.412 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.412 16:13:36 -- host/async_init.sh@53 -- # mktemp 00:17:35.412 16:13:36 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.eQqGbyVZfI 00:17:35.412 16:13:36 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:35.412 16:13:36 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.eQqGbyVZfI 00:17:35.412 16:13:36 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:35.412 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.412 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.412 16:13:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.412 16:13:36 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:35.412 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.412 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.412 [2024-04-24 16:13:36.591725] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.412 [2024-04-24 16:13:36.591869] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:35.412 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.412 16:13:36 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eQqGbyVZfI 00:17:35.412 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.412 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.412 [2024-04-24 16:13:36.599758] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:35.412 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.412 16:13:36 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eQqGbyVZfI 00:17:35.412 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.412 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.412 [2024-04-24 16:13:36.607791] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:35.412 [2024-04-24 16:13:36.607854] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:35.412 nvme0n1 00:17:35.412 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.412 16:13:36 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:35.412 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.412 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.412 [ 00:17:35.412 { 00:17:35.412 "name": "nvme0n1", 00:17:35.412 "aliases": [ 00:17:35.412 "a4075839-2859-46b2-96e2-8c6c62c14120" 00:17:35.412 ], 00:17:35.412 "product_name": "NVMe disk", 00:17:35.412 "block_size": 512, 00:17:35.412 "num_blocks": 2097152, 00:17:35.412 "uuid": "a4075839-2859-46b2-96e2-8c6c62c14120", 00:17:35.412 "assigned_rate_limits": { 00:17:35.412 "rw_ios_per_sec": 0, 00:17:35.412 "rw_mbytes_per_sec": 0, 00:17:35.412 "r_mbytes_per_sec": 0, 00:17:35.412 "w_mbytes_per_sec": 0 00:17:35.412 }, 00:17:35.412 "claimed": false, 00:17:35.412 "zoned": false, 00:17:35.412 "supported_io_types": { 00:17:35.412 "read": true, 00:17:35.412 "write": true, 00:17:35.412 "unmap": false, 00:17:35.412 "write_zeroes": true, 00:17:35.412 "flush": true, 00:17:35.412 "reset": true, 00:17:35.412 "compare": true, 00:17:35.412 "compare_and_write": true, 00:17:35.412 "abort": true, 00:17:35.412 "nvme_admin": true, 00:17:35.412 "nvme_io": true 00:17:35.412 }, 00:17:35.412 "memory_domains": [ 00:17:35.412 { 00:17:35.412 "dma_device_id": "system", 00:17:35.412 "dma_device_type": 1 00:17:35.412 } 00:17:35.412 ], 00:17:35.412 "driver_specific": { 00:17:35.412 "nvme": [ 00:17:35.412 { 00:17:35.412 "trid": { 00:17:35.412 "trtype": "TCP", 00:17:35.412 "adrfam": "IPv4", 00:17:35.412 "traddr": "10.0.0.2", 
00:17:35.412 "trsvcid": "4421", 00:17:35.412 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:35.412 }, 00:17:35.412 "ctrlr_data": { 00:17:35.412 "cntlid": 3, 00:17:35.412 "vendor_id": "0x8086", 00:17:35.412 "model_number": "SPDK bdev Controller", 00:17:35.412 "serial_number": "00000000000000000000", 00:17:35.412 "firmware_revision": "24.05", 00:17:35.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:35.412 "oacs": { 00:17:35.412 "security": 0, 00:17:35.412 "format": 0, 00:17:35.412 "firmware": 0, 00:17:35.412 "ns_manage": 0 00:17:35.412 }, 00:17:35.412 "multi_ctrlr": true, 00:17:35.412 "ana_reporting": false 00:17:35.412 }, 00:17:35.412 "vs": { 00:17:35.412 "nvme_version": "1.3" 00:17:35.412 }, 00:17:35.412 "ns_data": { 00:17:35.412 "id": 1, 00:17:35.412 "can_share": true 00:17:35.412 } 00:17:35.412 } 00:17:35.412 ], 00:17:35.412 "mp_policy": "active_passive" 00:17:35.412 } 00:17:35.412 } 00:17:35.412 ] 00:17:35.412 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.412 16:13:36 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.412 16:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.412 16:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.672 16:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.672 16:13:36 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.eQqGbyVZfI 00:17:35.672 16:13:36 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:35.672 16:13:36 -- host/async_init.sh@78 -- # nvmftestfini 00:17:35.672 16:13:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:35.672 16:13:36 -- nvmf/common.sh@117 -- # sync 00:17:35.672 16:13:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.672 16:13:36 -- nvmf/common.sh@120 -- # set +e 00:17:35.672 16:13:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.672 16:13:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.672 rmmod nvme_tcp 00:17:35.672 rmmod nvme_fabrics 00:17:35.672 rmmod nvme_keyring 00:17:35.672 16:13:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.672 16:13:36 -- nvmf/common.sh@124 -- # set -e 00:17:35.672 16:13:36 -- nvmf/common.sh@125 -- # return 0 00:17:35.672 16:13:36 -- nvmf/common.sh@478 -- # '[' -n 3432685 ']' 00:17:35.672 16:13:36 -- nvmf/common.sh@479 -- # killprocess 3432685 00:17:35.672 16:13:36 -- common/autotest_common.sh@936 -- # '[' -z 3432685 ']' 00:17:35.672 16:13:36 -- common/autotest_common.sh@940 -- # kill -0 3432685 00:17:35.672 16:13:36 -- common/autotest_common.sh@941 -- # uname 00:17:35.672 16:13:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:35.672 16:13:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3432685 00:17:35.672 16:13:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:35.672 16:13:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:35.672 16:13:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3432685' 00:17:35.672 killing process with pid 3432685 00:17:35.672 16:13:36 -- common/autotest_common.sh@955 -- # kill 3432685 00:17:35.672 [2024-04-24 16:13:36.771903] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:35.672 [2024-04-24 16:13:36.771941] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:35.672 16:13:36 -- common/autotest_common.sh@960 -- # wait 3432685 00:17:35.931 16:13:37 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:35.931 16:13:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:35.931 16:13:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:35.931 16:13:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.931 16:13:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.931 16:13:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.931 16:13:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.931 16:13:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.836 16:13:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.836 00:17:37.836 real 0m5.451s 00:17:37.836 user 0m2.031s 00:17:37.836 sys 0m1.777s 00:17:37.836 16:13:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:37.836 16:13:39 -- common/autotest_common.sh@10 -- # set +x 00:17:37.836 ************************************ 00:17:37.836 END TEST nvmf_async_init 00:17:37.836 ************************************ 00:17:37.836 16:13:39 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:37.836 16:13:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:37.836 16:13:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:37.836 16:13:39 -- common/autotest_common.sh@10 -- # set +x 00:17:38.095 ************************************ 00:17:38.095 START TEST dma 00:17:38.095 ************************************ 00:17:38.095 16:13:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:38.095 * Looking for test storage... 00:17:38.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:38.095 16:13:39 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.095 16:13:39 -- nvmf/common.sh@7 -- # uname -s 00:17:38.095 16:13:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.096 16:13:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.096 16:13:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.096 16:13:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.096 16:13:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.096 16:13:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.096 16:13:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.096 16:13:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.096 16:13:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.096 16:13:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.096 16:13:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:38.096 16:13:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:38.096 16:13:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.096 16:13:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.096 16:13:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.096 16:13:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.096 16:13:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.096 16:13:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.096 16:13:39 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.096 16:13:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.096 16:13:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.096 16:13:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.096 16:13:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.096 16:13:39 -- paths/export.sh@5 -- # export PATH 00:17:38.096 16:13:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.096 16:13:39 -- nvmf/common.sh@47 -- # : 0 00:17:38.096 16:13:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.096 16:13:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.096 16:13:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.096 16:13:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.096 16:13:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.096 16:13:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.096 16:13:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.096 16:13:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.096 16:13:39 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:38.096 16:13:39 -- host/dma.sh@13 -- # exit 0 00:17:38.096 00:17:38.096 real 0m0.066s 00:17:38.096 user 0m0.028s 00:17:38.096 sys 0m0.044s 00:17:38.096 16:13:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:38.096 16:13:39 -- common/autotest_common.sh@10 -- # set +x 00:17:38.096 ************************************ 00:17:38.096 END TEST dma 00:17:38.096 
************************************ 00:17:38.096 16:13:39 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:38.096 16:13:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:38.096 16:13:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:38.096 16:13:39 -- common/autotest_common.sh@10 -- # set +x 00:17:38.096 ************************************ 00:17:38.096 START TEST nvmf_identify 00:17:38.096 ************************************ 00:17:38.096 16:13:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:38.355 * Looking for test storage... 00:17:38.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:38.355 16:13:39 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.355 16:13:39 -- nvmf/common.sh@7 -- # uname -s 00:17:38.355 16:13:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.355 16:13:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.355 16:13:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.355 16:13:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.355 16:13:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.355 16:13:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.355 16:13:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.355 16:13:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.355 16:13:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.355 16:13:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.355 16:13:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:38.355 16:13:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:38.355 16:13:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.355 16:13:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.355 16:13:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.355 16:13:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.355 16:13:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.355 16:13:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.355 16:13:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.355 16:13:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.355 16:13:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.355 16:13:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.355 16:13:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.355 16:13:39 -- paths/export.sh@5 -- # export PATH 00:17:38.355 16:13:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.355 16:13:39 -- nvmf/common.sh@47 -- # : 0 00:17:38.355 16:13:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.355 16:13:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.355 16:13:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.355 16:13:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.355 16:13:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.355 16:13:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.355 16:13:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.355 16:13:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.355 16:13:39 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.355 16:13:39 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.355 16:13:39 -- host/identify.sh@14 -- # nvmftestinit 00:17:38.355 16:13:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:38.355 16:13:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.355 16:13:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:38.355 16:13:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:38.355 16:13:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:38.355 16:13:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.355 16:13:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.355 16:13:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.355 16:13:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:38.355 16:13:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:38.355 16:13:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:38.355 16:13:39 -- common/autotest_common.sh@10 -- # set +x 00:17:40.261 16:13:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:17:40.261 16:13:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:40.261 16:13:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:40.261 16:13:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:40.261 16:13:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:40.261 16:13:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:40.261 16:13:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:40.261 16:13:41 -- nvmf/common.sh@295 -- # net_devs=() 00:17:40.261 16:13:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:40.261 16:13:41 -- nvmf/common.sh@296 -- # e810=() 00:17:40.261 16:13:41 -- nvmf/common.sh@296 -- # local -ga e810 00:17:40.261 16:13:41 -- nvmf/common.sh@297 -- # x722=() 00:17:40.261 16:13:41 -- nvmf/common.sh@297 -- # local -ga x722 00:17:40.261 16:13:41 -- nvmf/common.sh@298 -- # mlx=() 00:17:40.261 16:13:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:40.261 16:13:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.261 16:13:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:40.261 16:13:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:40.261 16:13:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:40.261 16:13:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.261 16:13:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:40.261 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:40.261 16:13:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.261 16:13:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:40.261 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:40.261 16:13:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
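The e810/x722/mlx bookkeeping traced above is nvmf/common.sh matching candidate NICs by PCI vendor:device ID and then resolving each match to its kernel net device through sysfs. A minimal standalone sketch of the same lookup, assuming the Intel E810 ID pair (0x8086:0x159b) seen in this run; the script name is illustrative and not part of the SPDK tree:

    #!/usr/bin/env bash
    # scan_e810.sh - sketch: find E810 NICs (vendor 0x8086, device 0x159b)
    # and print the net devices behind them, mirroring how nvmf/common.sh
    # walks pci_bus_cache and /sys/bus/pci/devices/$pci/net/.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")    # e.g. 0x8086
        device=$(cat "$pci/device")    # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do    # one entry per bound net device
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done

On this host the equivalent walk yields the two ports (cvl_0_0, cvl_0_1) reported in the trace that follows.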
00:17:40.261 16:13:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.261 16:13:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.261 16:13:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:40.261 16:13:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.261 16:13:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:40.261 Found net devices under 0000:09:00.0: cvl_0_0 00:17:40.261 16:13:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.261 16:13:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.261 16:13:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.261 16:13:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:40.261 16:13:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.261 16:13:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:40.261 Found net devices under 0000:09:00.1: cvl_0_1 00:17:40.261 16:13:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.261 16:13:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:40.261 16:13:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:40.261 16:13:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:40.261 16:13:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.261 16:13:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.261 16:13:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.261 16:13:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:40.261 16:13:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.261 16:13:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.261 16:13:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:40.261 16:13:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.261 16:13:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.261 16:13:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:40.261 16:13:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:40.261 16:13:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.261 16:13:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.261 16:13:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.261 16:13:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.261 16:13:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:40.261 16:13:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.261 16:13:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.261 16:13:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.261 16:13:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:40.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:40.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:17:40.261 00:17:40.261 --- 10.0.0.2 ping statistics --- 00:17:40.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.261 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:17:40.261 16:13:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:17:40.261 00:17:40.261 --- 10.0.0.1 ping statistics --- 00:17:40.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.261 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:17:40.261 16:13:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.261 16:13:41 -- nvmf/common.sh@411 -- # return 0 00:17:40.261 16:13:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:40.261 16:13:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.261 16:13:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:40.261 16:13:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.261 16:13:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:40.261 16:13:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:40.261 16:13:41 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:40.261 16:13:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:40.261 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.261 16:13:41 -- host/identify.sh@19 -- # nvmfpid=3434825 00:17:40.261 16:13:41 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:40.261 16:13:41 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:40.261 16:13:41 -- host/identify.sh@23 -- # waitforlisten 3434825 00:17:40.261 16:13:41 -- common/autotest_common.sh@817 -- # '[' -z 3434825 ']' 00:17:40.261 16:13:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.261 16:13:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.261 16:13:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.261 16:13:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.261 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.261 [2024-04-24 16:13:41.513856] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:17:40.261 [2024-04-24 16:13:41.513950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.523 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.523 [2024-04-24 16:13:41.579175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.523 [2024-04-24 16:13:41.685968] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.523 [2024-04-24 16:13:41.686021] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:40.523 [2024-04-24 16:13:41.686049] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.523 [2024-04-24 16:13:41.686061] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.523 [2024-04-24 16:13:41.686072] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.523 [2024-04-24 16:13:41.686203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.523 [2024-04-24 16:13:41.686266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.523 [2024-04-24 16:13:41.686333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.523 [2024-04-24 16:13:41.686336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.523 16:13:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.523 16:13:41 -- common/autotest_common.sh@850 -- # return 0 00:17:40.523 16:13:41 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.523 16:13:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.523 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.784 [2024-04-24 16:13:41.813309] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.784 16:13:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.784 16:13:41 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:40.784 16:13:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:40.784 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.784 16:13:41 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.784 16:13:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.784 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.784 Malloc0 00:17:40.784 16:13:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.784 16:13:41 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.784 16:13:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.784 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.784 16:13:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.784 16:13:41 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:40.784 16:13:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.784 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.784 16:13:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.784 16:13:41 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.784 16:13:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.784 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.784 [2024-04-24 16:13:41.890777] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.784 16:13:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.784 16:13:41 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:40.784 16:13:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.784 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.784 16:13:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.784 16:13:41 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:17:40.784 16:13:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.784 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:40.784 [2024-04-24 16:13:41.906523] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:40.784 [ 00:17:40.784 { 00:17:40.784 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:40.784 "subtype": "Discovery", 00:17:40.784 "listen_addresses": [ 00:17:40.784 { 00:17:40.784 "transport": "TCP", 00:17:40.784 "trtype": "TCP", 00:17:40.784 "adrfam": "IPv4", 00:17:40.784 "traddr": "10.0.0.2", 00:17:40.784 "trsvcid": "4420" 00:17:40.784 } 00:17:40.784 ], 00:17:40.784 "allow_any_host": true, 00:17:40.784 "hosts": [] 00:17:40.784 }, 00:17:40.784 { 00:17:40.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.784 "subtype": "NVMe", 00:17:40.784 "listen_addresses": [ 00:17:40.784 { 00:17:40.784 "transport": "TCP", 00:17:40.784 "trtype": "TCP", 00:17:40.784 "adrfam": "IPv4", 00:17:40.784 "traddr": "10.0.0.2", 00:17:40.784 "trsvcid": "4420" 00:17:40.784 } 00:17:40.784 ], 00:17:40.784 "allow_any_host": true, 00:17:40.784 "hosts": [], 00:17:40.784 "serial_number": "SPDK00000000000001", 00:17:40.784 "model_number": "SPDK bdev Controller", 00:17:40.784 "max_namespaces": 32, 00:17:40.784 "min_cntlid": 1, 00:17:40.784 "max_cntlid": 65519, 00:17:40.784 "namespaces": [ 00:17:40.784 { 00:17:40.784 "nsid": 1, 00:17:40.784 "bdev_name": "Malloc0", 00:17:40.784 "name": "Malloc0", 00:17:40.784 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:40.784 "eui64": "ABCDEF0123456789", 00:17:40.784 "uuid": "fa63edf9-2918-4487-b663-94275136b105" 00:17:40.784 } 00:17:40.784 ] 00:17:40.784 } 00:17:40.784 ] 00:17:40.784 16:13:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.784 16:13:41 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:40.784 [2024-04-24 16:13:41.933253] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:17:40.785 [2024-04-24 16:13:41.933297] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434854 ] 00:17:40.785 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.785 [2024-04-24 16:13:41.970715] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:40.785 [2024-04-24 16:13:41.974798] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:40.785 [2024-04-24 16:13:41.974811] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:40.785 [2024-04-24 16:13:41.974828] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:40.785 [2024-04-24 16:13:41.974840] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:40.785 [2024-04-24 16:13:41.975104] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:40.785 [2024-04-24 16:13:41.975157] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2472d00 0 00:17:40.785 [2024-04-24 16:13:41.989762] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:40.785 [2024-04-24 16:13:41.989795] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:40.785 [2024-04-24 16:13:41.989804] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:40.785 [2024-04-24 16:13:41.989810] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:40.785 [2024-04-24 16:13:41.989862] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.989875] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.989882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.785 [2024-04-24 16:13:41.989899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:40.785 [2024-04-24 16:13:41.989926] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.785 [2024-04-24 16:13:41.997757] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.785 [2024-04-24 16:13:41.997777] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.785 [2024-04-24 16:13:41.997785] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.997793] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d1ec0) on tqpair=0x2472d00 00:17:40.785 [2024-04-24 16:13:41.997817] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:40.785 [2024-04-24 16:13:41.997833] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:40.785 [2024-04-24 16:13:41.997843] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:40.785 [2024-04-24 16:13:41.997864] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.997873] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.997880] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.785 [2024-04-24 16:13:41.997891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.785 [2024-04-24 16:13:41.997916] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.785 [2024-04-24 16:13:41.998102] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.785 [2024-04-24 16:13:41.998118] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.785 [2024-04-24 16:13:41.998125] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.998132] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d1ec0) on tqpair=0x2472d00 00:17:40.785 [2024-04-24 16:13:41.998144] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:40.785 [2024-04-24 16:13:41.998158] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:40.785 [2024-04-24 16:13:41.998174] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.998183] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.998189] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.785 [2024-04-24 16:13:41.998202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.785 [2024-04-24 16:13:41.998225] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.785 [2024-04-24 16:13:41.998454] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.785 [2024-04-24 16:13:41.998470] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.785 [2024-04-24 16:13:41.998478] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.998485] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d1ec0) on tqpair=0x2472d00 00:17:40.785 [2024-04-24 16:13:41.998496] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:40.785 [2024-04-24 16:13:41.998511] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:40.785 [2024-04-24 16:13:41.998527] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.998535] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.998542] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.785 [2024-04-24 16:13:41.998553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.785 [2024-04-24 16:13:41.998575] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.785 [2024-04-24 16:13:41.998711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.785 [2024-04-24 
16:13:41.998728] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.785 [2024-04-24 16:13:41.998735] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.998751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d1ec0) on tqpair=0x2472d00 00:17:40.785 [2024-04-24 16:13:41.998768] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:40.785 [2024-04-24 16:13:41.998788] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.998799] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.998806] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.785 [2024-04-24 16:13:41.998817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.785 [2024-04-24 16:13:41.998840] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.785 [2024-04-24 16:13:41.998970] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.785 [2024-04-24 16:13:41.998987] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.785 [2024-04-24 16:13:41.998995] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.999002] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d1ec0) on tqpair=0x2472d00 00:17:40.785 [2024-04-24 16:13:41.999014] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:40.785 [2024-04-24 16:13:41.999023] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:40.785 [2024-04-24 16:13:41.999037] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:40.785 [2024-04-24 16:13:41.999151] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:40.785 [2024-04-24 16:13:41.999160] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:40.785 [2024-04-24 16:13:41.999174] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.999183] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.999189] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.785 [2024-04-24 16:13:41.999200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.785 [2024-04-24 16:13:41.999236] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.785 [2024-04-24 16:13:41.999428] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.785 [2024-04-24 16:13:41.999445] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.785 [2024-04-24 16:13:41.999452] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.999459] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d1ec0) on tqpair=0x2472d00 00:17:40.785 [2024-04-24 16:13:41.999469] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:40.785 [2024-04-24 16:13:41.999488] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.999499] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.999506] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.785 [2024-04-24 16:13:41.999517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.785 [2024-04-24 16:13:41.999538] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.785 [2024-04-24 16:13:41.999666] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.785 [2024-04-24 16:13:41.999682] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.785 [2024-04-24 16:13:41.999694] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.999702] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d1ec0) on tqpair=0x2472d00 00:17:40.785 [2024-04-24 16:13:41.999711] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:40.785 [2024-04-24 16:13:41.999720] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:40.785 [2024-04-24 16:13:41.999734] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:40.785 [2024-04-24 16:13:41.999759] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:40.785 [2024-04-24 16:13:41.999779] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.785 [2024-04-24 16:13:41.999787] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.785 [2024-04-24 16:13:41.999799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.785 [2024-04-24 16:13:41.999821] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.785 [2024-04-24 16:13:42.000026] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:40.785 [2024-04-24 16:13:42.000043] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:40.786 [2024-04-24 16:13:42.000050] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000058] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2472d00): datao=0, datal=4096, cccid=0 00:17:40.786 [2024-04-24 16:13:42.000071] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d1ec0) on tqpair(0x2472d00): expected_datao=0, payload_size=4096 00:17:40.786 [2024-04-24 16:13:42.000084] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000099] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000113] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000131] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.786 [2024-04-24 16:13:42.000142] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.786 [2024-04-24 16:13:42.000149] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000156] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d1ec0) on tqpair=0x2472d00 00:17:40.786 [2024-04-24 16:13:42.000170] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:40.786 [2024-04-24 16:13:42.000179] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:40.786 [2024-04-24 16:13:42.000187] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:40.786 [2024-04-24 16:13:42.000196] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:40.786 [2024-04-24 16:13:42.000204] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:40.786 [2024-04-24 16:13:42.000212] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:40.786 [2024-04-24 16:13:42.000228] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:40.786 [2024-04-24 16:13:42.000243] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000251] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000258] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.786 [2024-04-24 16:13:42.000273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.786 [2024-04-24 16:13:42.000313] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.786 [2024-04-24 16:13:42.000522] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.786 [2024-04-24 16:13:42.000540] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.786 [2024-04-24 16:13:42.000547] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000554] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d1ec0) on tqpair=0x2472d00 00:17:40.786 [2024-04-24 16:13:42.000568] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000575] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000582] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2472d00) 00:17:40.786 [2024-04-24 16:13:42.000592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:40.786 [2024-04-24 16:13:42.000603] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000610] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000616] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2472d00) 00:17:40.786 [2024-04-24 16:13:42.000625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.786 [2024-04-24 16:13:42.000635] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000642] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000648] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2472d00) 00:17:40.786 [2024-04-24 16:13:42.000657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.786 [2024-04-24 16:13:42.000667] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000689] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000695] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2472d00) 00:17:40.786 [2024-04-24 16:13:42.000704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.786 [2024-04-24 16:13:42.000713] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:40.786 [2024-04-24 16:13:42.000756] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:40.786 [2024-04-24 16:13:42.000773] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.000780] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2472d00) 00:17:40.786 [2024-04-24 16:13:42.000791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.786 [2024-04-24 16:13:42.000815] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d1ec0, cid 0, qid 0 00:17:40.786 [2024-04-24 16:13:42.000826] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d2020, cid 1, qid 0 00:17:40.786 [2024-04-24 16:13:42.000834] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d2180, cid 2, qid 0 00:17:40.786 [2024-04-24 16:13:42.000842] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d22e0, cid 3, qid 0 00:17:40.786 [2024-04-24 16:13:42.000850] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d2440, cid 4, qid 0 00:17:40.786 [2024-04-24 16:13:42.001039] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.786 [2024-04-24 16:13:42.001063] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.786 [2024-04-24 16:13:42.001073] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.001080] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d2440) on tqpair=0x2472d00 
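The admin-queue exchange traced above (FABRIC CONNECT, PROPERTY GET/SET on CC/CSTS, IDENTIFY, SET FEATURES for async event configuration, keep-alive setup) runs over the two-port loopback topology that nvmf_tcp_init assembled earlier in this test: port cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, port cvl_0_1 left in the root namespace as the initiator at 10.0.0.1. Condensed from that earlier trace into a plain sketch (interface names as on this host):

    # target port into its own namespace, addressed as 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # initiator port stays in the root namespace as 10.0.0.1
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity checks, as the harness performed them
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1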
00:17:40.786 [2024-04-24 16:13:42.001091] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:40.786 [2024-04-24 16:13:42.001100] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:40.786 [2024-04-24 16:13:42.001120] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.001146] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2472d00) 00:17:40.786 [2024-04-24 16:13:42.001158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.786 [2024-04-24 16:13:42.001179] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d2440, cid 4, qid 0 00:17:40.786 [2024-04-24 16:13:42.001370] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:40.786 [2024-04-24 16:13:42.001390] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:40.786 [2024-04-24 16:13:42.001402] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.001411] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2472d00): datao=0, datal=4096, cccid=4 00:17:40.786 [2024-04-24 16:13:42.001424] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d2440) on tqpair(0x2472d00): expected_datao=0, payload_size=4096 00:17:40.786 [2024-04-24 16:13:42.001436] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.001454] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.001462] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.042760] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.786 [2024-04-24 16:13:42.042781] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.786 [2024-04-24 16:13:42.042789] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.042796] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d2440) on tqpair=0x2472d00 00:17:40.786 [2024-04-24 16:13:42.042819] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:40.786 [2024-04-24 16:13:42.042853] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.042863] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2472d00) 00:17:40.786 [2024-04-24 16:13:42.042875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.786 [2024-04-24 16:13:42.042887] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.042895] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.042901] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2472d00) 00:17:40.786 [2024-04-24 16:13:42.042911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.786 [2024-04-24 16:13:42.042941] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d2440, cid 4, qid 0 00:17:40.786 [2024-04-24 16:13:42.042954] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d25a0, cid 5, qid 0 00:17:40.786 [2024-04-24 16:13:42.043138] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:40.786 [2024-04-24 16:13:42.043155] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:40.786 [2024-04-24 16:13:42.043163] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.043170] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2472d00): datao=0, datal=1024, cccid=4 00:17:40.786 [2024-04-24 16:13:42.043182] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d2440) on tqpair(0x2472d00): expected_datao=0, payload_size=1024 00:17:40.786 [2024-04-24 16:13:42.043190] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.043200] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.043208] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.043217] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:40.786 [2024-04-24 16:13:42.043226] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:40.786 [2024-04-24 16:13:42.043233] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:40.786 [2024-04-24 16:13:42.043240] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d25a0) on tqpair=0x2472d00 00:17:41.047 [2024-04-24 16:13:42.083927] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.047 [2024-04-24 16:13:42.083949] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.047 [2024-04-24 16:13:42.083957] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.083965] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d2440) on tqpair=0x2472d00 00:17:41.047 [2024-04-24 16:13:42.083985] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.083995] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2472d00) 00:17:41.047 [2024-04-24 16:13:42.084007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.047 [2024-04-24 16:13:42.084040] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d2440, cid 4, qid 0 00:17:41.047 [2024-04-24 16:13:42.084181] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.047 [2024-04-24 16:13:42.084201] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.047 [2024-04-24 16:13:42.084213] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.084223] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2472d00): datao=0, datal=3072, cccid=4 00:17:41.047 [2024-04-24 16:13:42.084235] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d2440) on tqpair(0x2472d00): expected_datao=0, payload_size=3072 00:17:41.047 [2024-04-24 16:13:42.084246] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.084272] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
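Before this discovery-log fetch, host/identify.sh provisioned the target over JSON-RPC (the rpc_cmd calls earlier in the test). A recap of that sequence for reproducing the run by hand, assuming an SPDK checkout with rpc.py talking to the default /var/tmp/spdk.sock and nvmf_tgt already started inside the target namespace with -i 0 -e 0xFFFF -m 0xF:

    # condensed from the rpc_cmd trace above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # then query the discovery subsystem exactly as the harness does,
    # producing the controller dump that follows:
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all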
00:17:41.047 [2024-04-24 16:13:42.084284] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.084360] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.047 [2024-04-24 16:13:42.084376] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.047 [2024-04-24 16:13:42.084383] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.084391] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d2440) on tqpair=0x2472d00 00:17:41.047 [2024-04-24 16:13:42.084408] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.084417] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2472d00) 00:17:41.047 [2024-04-24 16:13:42.084429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.047 [2024-04-24 16:13:42.084459] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d2440, cid 4, qid 0 00:17:41.047 [2024-04-24 16:13:42.084623] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.047 [2024-04-24 16:13:42.084640] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.047 [2024-04-24 16:13:42.084647] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.084653] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2472d00): datao=0, datal=8, cccid=4 00:17:41.047 [2024-04-24 16:13:42.084666] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d2440) on tqpair(0x2472d00): expected_datao=0, payload_size=8 00:17:41.047 [2024-04-24 16:13:42.084674] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.084684] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.084692] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.124936] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.047 [2024-04-24 16:13:42.124958] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.047 [2024-04-24 16:13:42.124968] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.047 [2024-04-24 16:13:42.124976] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d2440) on tqpair=0x2472d00 00:17:41.047 ===================================================== 00:17:41.047 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:41.047 ===================================================== 00:17:41.047 Controller Capabilities/Features 00:17:41.048 ================================ 00:17:41.048 Vendor ID: 0000 00:17:41.048 Subsystem Vendor ID: 0000 00:17:41.048 Serial Number: .................... 00:17:41.048 Model Number: ........................................ 
00:17:41.048 Firmware Version: 24.05 00:17:41.048 Recommended Arb Burst: 0 00:17:41.048 IEEE OUI Identifier: 00 00 00 00:17:41.048 Multi-path I/O 00:17:41.048 May have multiple subsystem ports: No 00:17:41.048 May have multiple controllers: No 00:17:41.048 Associated with SR-IOV VF: No 00:17:41.048 Max Data Transfer Size: 131072 00:17:41.048 Max Number of Namespaces: 0 00:17:41.048 Max Number of I/O Queues: 1024 00:17:41.048 NVMe Specification Version (VS): 1.3 00:17:41.048 NVMe Specification Version (Identify): 1.3 00:17:41.048 Maximum Queue Entries: 128 00:17:41.048 Contiguous Queues Required: Yes 00:17:41.048 Arbitration Mechanisms Supported 00:17:41.048 Weighted Round Robin: Not Supported 00:17:41.048 Vendor Specific: Not Supported 00:17:41.048 Reset Timeout: 15000 ms 00:17:41.048 Doorbell Stride: 4 bytes 00:17:41.048 NVM Subsystem Reset: Not Supported 00:17:41.048 Command Sets Supported 00:17:41.048 NVM Command Set: Supported 00:17:41.048 Boot Partition: Not Supported 00:17:41.048 Memory Page Size Minimum: 4096 bytes 00:17:41.048 Memory Page Size Maximum: 4096 bytes 00:17:41.048 Persistent Memory Region: Not Supported 00:17:41.048 Optional Asynchronous Events Supported 00:17:41.048 Namespace Attribute Notices: Not Supported 00:17:41.048 Firmware Activation Notices: Not Supported 00:17:41.048 ANA Change Notices: Not Supported 00:17:41.048 PLE Aggregate Log Change Notices: Not Supported 00:17:41.048 LBA Status Info Alert Notices: Not Supported 00:17:41.048 EGE Aggregate Log Change Notices: Not Supported 00:17:41.048 Normal NVM Subsystem Shutdown event: Not Supported 00:17:41.048 Zone Descriptor Change Notices: Not Supported 00:17:41.048 Discovery Log Change Notices: Supported 00:17:41.048 Controller Attributes 00:17:41.048 128-bit Host Identifier: Not Supported 00:17:41.048 Non-Operational Permissive Mode: Not Supported 00:17:41.048 NVM Sets: Not Supported 00:17:41.048 Read Recovery Levels: Not Supported 00:17:41.048 Endurance Groups: Not Supported 00:17:41.048 Predictable Latency Mode: Not Supported 00:17:41.048 Traffic Based Keep ALive: Not Supported 00:17:41.048 Namespace Granularity: Not Supported 00:17:41.048 SQ Associations: Not Supported 00:17:41.048 UUID List: Not Supported 00:17:41.048 Multi-Domain Subsystem: Not Supported 00:17:41.048 Fixed Capacity Management: Not Supported 00:17:41.048 Variable Capacity Management: Not Supported 00:17:41.048 Delete Endurance Group: Not Supported 00:17:41.048 Delete NVM Set: Not Supported 00:17:41.048 Extended LBA Formats Supported: Not Supported 00:17:41.048 Flexible Data Placement Supported: Not Supported 00:17:41.048 00:17:41.048 Controller Memory Buffer Support 00:17:41.048 ================================ 00:17:41.048 Supported: No 00:17:41.048 00:17:41.048 Persistent Memory Region Support 00:17:41.048 ================================ 00:17:41.048 Supported: No 00:17:41.048 00:17:41.048 Admin Command Set Attributes 00:17:41.048 ============================ 00:17:41.048 Security Send/Receive: Not Supported 00:17:41.048 Format NVM: Not Supported 00:17:41.048 Firmware Activate/Download: Not Supported 00:17:41.048 Namespace Management: Not Supported 00:17:41.048 Device Self-Test: Not Supported 00:17:41.048 Directives: Not Supported 00:17:41.048 NVMe-MI: Not Supported 00:17:41.048 Virtualization Management: Not Supported 00:17:41.048 Doorbell Buffer Config: Not Supported 00:17:41.048 Get LBA Status Capability: Not Supported 00:17:41.048 Command & Feature Lockdown Capability: Not Supported 00:17:41.048 Abort Command Limit: 1 00:17:41.048 Async 
Event Request Limit: 4 00:17:41.048 Number of Firmware Slots: N/A 00:17:41.048 Firmware Slot 1 Read-Only: N/A 00:17:41.048 Firmware Activation Without Reset: N/A 00:17:41.048 Multiple Update Detection Support: N/A 00:17:41.048 Firmware Update Granularity: No Information Provided 00:17:41.048 Per-Namespace SMART Log: No 00:17:41.048 Asymmetric Namespace Access Log Page: Not Supported 00:17:41.048 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:41.048 Command Effects Log Page: Not Supported 00:17:41.048 Get Log Page Extended Data: Supported 00:17:41.048 Telemetry Log Pages: Not Supported 00:17:41.048 Persistent Event Log Pages: Not Supported 00:17:41.048 Supported Log Pages Log Page: May Support 00:17:41.048 Commands Supported & Effects Log Page: Not Supported 00:17:41.048 Feature Identifiers & Effects Log Page:May Support 00:17:41.048 NVMe-MI Commands & Effects Log Page: May Support 00:17:41.048 Data Area 4 for Telemetry Log: Not Supported 00:17:41.048 Error Log Page Entries Supported: 128 00:17:41.048 Keep Alive: Not Supported 00:17:41.048 00:17:41.048 NVM Command Set Attributes 00:17:41.048 ========================== 00:17:41.048 Submission Queue Entry Size 00:17:41.048 Max: 1 00:17:41.048 Min: 1 00:17:41.048 Completion Queue Entry Size 00:17:41.048 Max: 1 00:17:41.048 Min: 1 00:17:41.048 Number of Namespaces: 0 00:17:41.048 Compare Command: Not Supported 00:17:41.048 Write Uncorrectable Command: Not Supported 00:17:41.048 Dataset Management Command: Not Supported 00:17:41.048 Write Zeroes Command: Not Supported 00:17:41.048 Set Features Save Field: Not Supported 00:17:41.048 Reservations: Not Supported 00:17:41.048 Timestamp: Not Supported 00:17:41.048 Copy: Not Supported 00:17:41.048 Volatile Write Cache: Not Present 00:17:41.048 Atomic Write Unit (Normal): 1 00:17:41.048 Atomic Write Unit (PFail): 1 00:17:41.048 Atomic Compare & Write Unit: 1 00:17:41.048 Fused Compare & Write: Supported 00:17:41.048 Scatter-Gather List 00:17:41.048 SGL Command Set: Supported 00:17:41.048 SGL Keyed: Supported 00:17:41.048 SGL Bit Bucket Descriptor: Not Supported 00:17:41.048 SGL Metadata Pointer: Not Supported 00:17:41.048 Oversized SGL: Not Supported 00:17:41.048 SGL Metadata Address: Not Supported 00:17:41.048 SGL Offset: Supported 00:17:41.048 Transport SGL Data Block: Not Supported 00:17:41.048 Replay Protected Memory Block: Not Supported 00:17:41.048 00:17:41.048 Firmware Slot Information 00:17:41.048 ========================= 00:17:41.048 Active slot: 0 00:17:41.048 00:17:41.048 00:17:41.048 Error Log 00:17:41.048 ========= 00:17:41.048 00:17:41.048 Active Namespaces 00:17:41.048 ================= 00:17:41.048 Discovery Log Page 00:17:41.048 ================== 00:17:41.048 Generation Counter: 2 00:17:41.048 Number of Records: 2 00:17:41.048 Record Format: 0 00:17:41.048 00:17:41.048 Discovery Log Entry 0 00:17:41.048 ---------------------- 00:17:41.048 Transport Type: 3 (TCP) 00:17:41.048 Address Family: 1 (IPv4) 00:17:41.048 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:41.048 Entry Flags: 00:17:41.048 Duplicate Returned Information: 1 00:17:41.048 Explicit Persistent Connection Support for Discovery: 1 00:17:41.048 Transport Requirements: 00:17:41.048 Secure Channel: Not Required 00:17:41.048 Port ID: 0 (0x0000) 00:17:41.048 Controller ID: 65535 (0xffff) 00:17:41.048 Admin Max SQ Size: 128 00:17:41.048 Transport Service Identifier: 4420 00:17:41.048 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:41.048 Transport Address: 10.0.0.2 00:17:41.048 
Discovery Log Entry 1 00:17:41.048 ---------------------- 00:17:41.048 Transport Type: 3 (TCP) 00:17:41.048 Address Family: 1 (IPv4) 00:17:41.048 Subsystem Type: 2 (NVM Subsystem) 00:17:41.048 Entry Flags: 00:17:41.048 Duplicate Returned Information: 0 00:17:41.048 Explicit Persistent Connection Support for Discovery: 0 00:17:41.048 Transport Requirements: 00:17:41.048 Secure Channel: Not Required 00:17:41.048 Port ID: 0 (0x0000) 00:17:41.048 Controller ID: 65535 (0xffff) 00:17:41.048 Admin Max SQ Size: 128 00:17:41.048 Transport Service Identifier: 4420 00:17:41.048 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:41.048 Transport Address: 10.0.0.2 [2024-04-24 16:13:42.125092] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:41.048 [2024-04-24 16:13:42.125120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.048 [2024-04-24 16:13:42.125135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.048 [2024-04-24 16:13:42.125145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.048 [2024-04-24 16:13:42.125155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.048 [2024-04-24 16:13:42.125168] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.048 [2024-04-24 16:13:42.125177] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.125183] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2472d00) 00:17:41.049 [2024-04-24 16:13:42.125195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.049 [2024-04-24 16:13:42.125234] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d22e0, cid 3, qid 0 00:17:41.049 [2024-04-24 16:13:42.125436] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.049 [2024-04-24 16:13:42.125457] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.049 [2024-04-24 16:13:42.125467] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.125477] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d22e0) on tqpair=0x2472d00 00:17:41.049 [2024-04-24 16:13:42.125492] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.125500] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.125506] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2472d00) 00:17:41.049 [2024-04-24 16:13:42.125517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.049 [2024-04-24 16:13:42.125547] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d22e0, cid 3, qid 0 00:17:41.049 [2024-04-24 16:13:42.125692] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.049 [2024-04-24 16:13:42.125709] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.049 [2024-04-24 16:13:42.125716] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.125723] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d22e0) on tqpair=0x2472d00 00:17:41.049 [2024-04-24 16:13:42.125736] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:41.049 [2024-04-24 16:13:42.129772] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:41.049 [2024-04-24 16:13:42.129796] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.129807] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.129817] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2472d00) 00:17:41.049 [2024-04-24 16:13:42.129829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.049 [2024-04-24 16:13:42.129852] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d22e0, cid 3, qid 0 00:17:41.049 [2024-04-24 16:13:42.130030] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.049 [2024-04-24 16:13:42.130047] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.049 [2024-04-24 16:13:42.130054] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.130061] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d22e0) on tqpair=0x2472d00 00:17:41.049 [2024-04-24 16:13:42.130078] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:17:41.049 00:17:41.049 16:13:42 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:41.049 [2024-04-24 16:13:42.163662] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
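The identify.sh line above shows the transport ID string handed to spdk_nvme_identify, and the debug records that follow are the generic admin-queue bring-up any SPDK initiator performs. A hedged sketch of the same connect-and-identify flow using SPDK's public API (signatures as in the public headers around the v24.05-pre tree named in the banner; error handling trimmed, so treat it as a sketch rather than a drop-in tool):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) < 0) return 1;

    /* Same transport ID string the test passes via -r above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) return 1;

    /* Drives the whole state machine logged below: socket connect and
     * ICReq/ICResp, FABRIC CONNECT, the VS/CAP/CC/CSTS property
     * exchanges, IDENTIFY, AER arming, keep-alive setup. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) return 1;

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("SN: %.*s  MN: %.*s\n",
           (int)sizeof(cdata->sn), cdata->sn,
           (int)sizeof(cdata->mn), cdata->mn);

    spdk_nvme_detach(ctrlr);
    return 0;
}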
00:17:41.049 [2024-04-24 16:13:42.163709] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434866 ] 00:17:41.049 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.049 [2024-04-24 16:13:42.195481] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:41.049 [2024-04-24 16:13:42.195528] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:41.049 [2024-04-24 16:13:42.195538] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:41.049 [2024-04-24 16:13:42.195551] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:41.049 [2024-04-24 16:13:42.195562] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:41.049 [2024-04-24 16:13:42.195767] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:41.049 [2024-04-24 16:13:42.195811] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5bfd00 0 00:17:41.049 [2024-04-24 16:13:42.209759] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:41.049 [2024-04-24 16:13:42.209779] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:41.049 [2024-04-24 16:13:42.209787] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:41.049 [2024-04-24 16:13:42.209794] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:41.049 [2024-04-24 16:13:42.209833] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.209845] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.209852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.049 [2024-04-24 16:13:42.209865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:41.049 [2024-04-24 16:13:42.209891] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.049 [2024-04-24 16:13:42.217768] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.049 [2024-04-24 16:13:42.217787] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.049 [2024-04-24 16:13:42.217796] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.217803] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61eec0) on tqpair=0x5bfd00 00:17:41.049 [2024-04-24 16:13:42.217822] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:41.049 [2024-04-24 16:13:42.217836] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:41.049 [2024-04-24 16:13:42.217846] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:41.049 [2024-04-24 16:13:42.217863] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.217871] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.049 [2024-04-24 
16:13:42.217878] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.049 [2024-04-24 16:13:42.217889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.049 [2024-04-24 16:13:42.217913] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.049 [2024-04-24 16:13:42.218081] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.049 [2024-04-24 16:13:42.218098] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.049 [2024-04-24 16:13:42.218105] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218112] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61eec0) on tqpair=0x5bfd00 00:17:41.049 [2024-04-24 16:13:42.218121] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:41.049 [2024-04-24 16:13:42.218135] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:41.049 [2024-04-24 16:13:42.218151] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218158] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218165] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.049 [2024-04-24 16:13:42.218176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.049 [2024-04-24 16:13:42.218198] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.049 [2024-04-24 16:13:42.218331] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.049 [2024-04-24 16:13:42.218348] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.049 [2024-04-24 16:13:42.218355] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218362] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61eec0) on tqpair=0x5bfd00 00:17:41.049 [2024-04-24 16:13:42.218370] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:41.049 [2024-04-24 16:13:42.218385] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:41.049 [2024-04-24 16:13:42.218400] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218407] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218414] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.049 [2024-04-24 16:13:42.218425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.049 [2024-04-24 16:13:42.218446] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.049 [2024-04-24 16:13:42.218571] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.049 [2024-04-24 16:13:42.218587] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.049 
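The "read vs" and "read cap" states above are serviced with NVMe-oF Fabrics Property Get commands (the FABRIC PROPERTY GET notices). On fabrics these stand in for the memory-mapped register reads a PCIe driver would do, against the same register map: CAP at offset 0x00, VS at 0x08, CC at 0x14, CSTS at 0x1c. A simplified view of the 64-byte capsule per the NVMe-oF spec; this is an illustration, not SPDK's internal type:

#include <stdint.h>
#include <stdio.h>

#define NVME_OPC_FABRIC       0x7f  /* Fabrics command opcode */
#define NVMF_FCTYPE_PROP_GET  0x04  /* Property Get function type */

struct nvmf_prop_get_cmd {
    uint8_t  opcode;      /* 0x7f */
    uint8_t  rsvd1;
    uint16_t cid;
    uint8_t  fctype;      /* 0x04 = Property Get */
    uint8_t  rsvd2[35];
    uint8_t  attrib;      /* 0 = 4-byte property, 1 = 8-byte (e.g. CAP) */
    uint8_t  rsvd3[3];
    uint32_t ofst;        /* 0x00 CAP, 0x08 VS, 0x14 CC, 0x1c CSTS */
    uint8_t  rsvd4[16];
};

int main(void)
{
    /* Must be exactly one 64-byte submission queue entry. */
    printf("Property Get SQE: %zu bytes\n", sizeof(struct nvmf_prop_get_cmd));
    return 0;
}

The property value comes back in the completion, which is how the driver learns VS (1.3 here) and CAP.TO; CAP.TO is expressed in 500 ms units and is where the 15000 ms state-machine timeouts in the following records come from.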
[2024-04-24 16:13:42.218594] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218601] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61eec0) on tqpair=0x5bfd00 00:17:41.049 [2024-04-24 16:13:42.218614] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:41.049 [2024-04-24 16:13:42.218634] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218645] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218651] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.049 [2024-04-24 16:13:42.218662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.049 [2024-04-24 16:13:42.218684] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.049 [2024-04-24 16:13:42.218831] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.049 [2024-04-24 16:13:42.218848] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.049 [2024-04-24 16:13:42.218855] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.049 [2024-04-24 16:13:42.218862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61eec0) on tqpair=0x5bfd00 00:17:41.049 [2024-04-24 16:13:42.218870] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:41.050 [2024-04-24 16:13:42.218879] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:41.050 [2024-04-24 16:13:42.218893] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:41.050 [2024-04-24 16:13:42.219006] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:41.050 [2024-04-24 16:13:42.219014] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:41.050 [2024-04-24 16:13:42.219025] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.219033] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.219039] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.219065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.050 [2024-04-24 16:13:42.219086] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.050 [2024-04-24 16:13:42.219280] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.050 [2024-04-24 16:13:42.219296] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.050 [2024-04-24 16:13:42.219304] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.219310] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61eec0) on tqpair=0x5bfd00 00:17:41.050 
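This block is the standard controller enable sequence: clear CC.EN and wait for CSTS.RDY = 0, then set CC.EN = 1 and wait for CSTS.RDY = 1. A compact sketch of that loop; prop_read32/prop_write32 are hypothetical stand-ins for the Property Get/Set capsules, the simulated registers exist only to make the sketch runnable, and real code would bound the polls with the CAP.TO-derived 15000 ms timeout instead of spinning:

#include <stdint.h>
#include <stdio.h>

#define REG_CC   0x14
#define REG_CSTS 0x1c
#define CC_EN    (1u << 0)
#define CSTS_RDY (1u << 0)

/* Toy "controller": raises RDY as soon as EN is written. */
static uint32_t reg_cc, reg_csts;
static uint32_t prop_read32(uint32_t ofst)
{
    return ofst == REG_CC ? reg_cc : reg_csts;
}
static void prop_write32(uint32_t ofst, uint32_t v)
{
    if (ofst == REG_CC) { reg_cc = v; reg_csts = (v & CC_EN) ? CSTS_RDY : 0; }
}

static void nvme_enable(void)
{
    uint32_t cc = prop_read32(REG_CC);

    if (cc & CC_EN) {                             /* "check en" */
        prop_write32(REG_CC, cc & ~CC_EN);        /* disable first */
        while (prop_read32(REG_CSTS) & CSTS_RDY)
            ;                                     /* wait CSTS.RDY = 0 */
    }
    prop_write32(REG_CC, cc | CC_EN);             /* "Setting CC.EN = 1" */
    while (!(prop_read32(REG_CSTS) & CSTS_RDY))
        ;                                         /* wait CSTS.RDY = 1 */
}

int main(void)
{
    nvme_enable();
    printf("controller is ready\n");
    return 0;
}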
[2024-04-24 16:13:42.219319] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:41.050 [2024-04-24 16:13:42.219337] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.219348] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.219355] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.219366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.050 [2024-04-24 16:13:42.219387] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.050 [2024-04-24 16:13:42.219512] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.050 [2024-04-24 16:13:42.219530] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.050 [2024-04-24 16:13:42.219539] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.219546] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61eec0) on tqpair=0x5bfd00 00:17:41.050 [2024-04-24 16:13:42.219557] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:41.050 [2024-04-24 16:13:42.219567] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:41.050 [2024-04-24 16:13:42.219582] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:41.050 [2024-04-24 16:13:42.219602] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:41.050 [2024-04-24 16:13:42.219619] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.219628] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.219639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.050 [2024-04-24 16:13:42.219661] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.050 [2024-04-24 16:13:42.219928] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.050 [2024-04-24 16:13:42.219948] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.050 [2024-04-24 16:13:42.219959] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.219970] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5bfd00): datao=0, datal=4096, cccid=0 00:17:41.050 [2024-04-24 16:13:42.219981] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x61eec0) on tqpair(0x5bfd00): expected_datao=0, payload_size=4096 00:17:41.050 [2024-04-24 16:13:42.219995] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220017] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220026] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
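With the controller ready, the driver issues IDENTIFY (opcode 06h) with cdw10:00000001, i.e. CNS 01h (Identify Controller), and the 4096-byte structure arrives as C2HData. The CNS values used by the later IDENTIFY commands in this trace decode the same way; a small decoder over exactly the cdw10 values seen here:

#include <stdint.h>
#include <stdio.h>

/* CNS is the low byte of CDW10 for IDENTIFY (opcode 06h). */
static const char *identify_cns_name(uint32_t cdw10)
{
    switch (cdw10 & 0xff) {
    case 0x00: return "Identify Namespace";
    case 0x01: return "Identify Controller";
    case 0x02: return "Active Namespace ID list";
    case 0x03: return "Namespace Identification Descriptor list";
    default:   return "other CNS";
    }
}

int main(void)
{
    /* The four IDENTIFY commands issued during this bring-up. */
    uint32_t seen[] = { 0x00000001, 0x00000002, 0x00000000, 0x00000003 };
    for (unsigned i = 0; i < 4; i++)
        printf("cdw10:%08x -> %s\n", seen[i], identify_cns_name(seen[i]));
    return 0;
}

The parsed Identify Controller data feeds the nvme_ctrlr_identify_done lines that follow: the MDTS-limited max_xfer_size of 131072, CNTLID 0x0001, and fused compare-and-write support.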
00:17:41.050 [2024-04-24 16:13:42.220105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.050 [2024-04-24 16:13:42.220120] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.050 [2024-04-24 16:13:42.220127] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220134] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61eec0) on tqpair=0x5bfd00 00:17:41.050 [2024-04-24 16:13:42.220145] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:41.050 [2024-04-24 16:13:42.220153] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:41.050 [2024-04-24 16:13:42.220160] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:41.050 [2024-04-24 16:13:42.220167] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:41.050 [2024-04-24 16:13:42.220174] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:41.050 [2024-04-24 16:13:42.220182] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:41.050 [2024-04-24 16:13:42.220197] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:41.050 [2024-04-24 16:13:42.220212] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220220] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220226] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.220237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.050 [2024-04-24 16:13:42.220259] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.050 [2024-04-24 16:13:42.220434] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.050 [2024-04-24 16:13:42.220451] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.050 [2024-04-24 16:13:42.220458] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220465] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61eec0) on tqpair=0x5bfd00 00:17:41.050 [2024-04-24 16:13:42.220476] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220484] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220490] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.220500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.050 [2024-04-24 16:13:42.220511] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220517] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220524] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.220533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.050 [2024-04-24 16:13:42.220542] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220549] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220556] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.220580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.050 [2024-04-24 16:13:42.220590] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220596] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220603] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.220611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.050 [2024-04-24 16:13:42.220619] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:41.050 [2024-04-24 16:13:42.220653] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:41.050 [2024-04-24 16:13:42.220666] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.220673] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.220683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.050 [2024-04-24 16:13:42.220705] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61eec0, cid 0, qid 0 00:17:41.050 [2024-04-24 16:13:42.220730] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f020, cid 1, qid 0 00:17:41.050 [2024-04-24 16:13:42.220738] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f180, cid 2, qid 0 00:17:41.050 [2024-04-24 16:13:42.220757] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.050 [2024-04-24 16:13:42.220766] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f440, cid 4, qid 0 00:17:41.050 [2024-04-24 16:13:42.221014] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.050 [2024-04-24 16:13:42.221030] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.050 [2024-04-24 16:13:42.221038] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.221046] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f440) on tqpair=0x5bfd00 00:17:41.050 [2024-04-24 16:13:42.221060] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:41.050 [2024-04-24 16:13:42.221070] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:41.050 
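cdw10:0000000f on the GET FEATURES above is Feature ID 0Fh, Keep Alive Timer. The controller reports a 10000 ms timeout (see "Keep Alive Granularity: 10000 ms" in the cnode1 identify dump later in this log), and the driver arms its timer at half of that, which is exactly the "Sending keep alive every 5000000 us" line. The arithmetic, as a trivially runnable check:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* KATO reported by GET FEATURES (FID 0Fh) in this trace. */
    uint32_t kato_ms = 10000;
    /* Send at half the timeout so a single delayed keep alive does not
     * let the association expire; reproduces the 5000000 us in the log. */
    uint64_t interval_us = (uint64_t)kato_ms * 1000 / 2;
    printf("keep alive every %llu us\n", (unsigned long long)interval_us);
    return 0;
}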
[2024-04-24 16:13:42.221088] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:41.050 [2024-04-24 16:13:42.221117] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:41.050 [2024-04-24 16:13:42.221128] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.221136] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.050 [2024-04-24 16:13:42.221142] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5bfd00) 00:17:41.050 [2024-04-24 16:13:42.221152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.050 [2024-04-24 16:13:42.221172] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f440, cid 4, qid 0 00:17:41.051 [2024-04-24 16:13:42.221405] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.051 [2024-04-24 16:13:42.221422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.051 [2024-04-24 16:13:42.221429] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.221436] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f440) on tqpair=0x5bfd00 00:17:41.051 [2024-04-24 16:13:42.221489] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:41.051 [2024-04-24 16:13:42.221510] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:41.051 [2024-04-24 16:13:42.221526] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.221534] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5bfd00) 00:17:41.051 [2024-04-24 16:13:42.221545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.051 [2024-04-24 16:13:42.221580] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f440, cid 4, qid 0 00:17:41.051 [2024-04-24 16:13:42.225760] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.051 [2024-04-24 16:13:42.225777] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.051 [2024-04-24 16:13:42.225785] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.225791] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5bfd00): datao=0, datal=4096, cccid=4 00:17:41.051 [2024-04-24 16:13:42.225798] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x61f440) on tqpair(0x5bfd00): expected_datao=0, payload_size=4096 00:17:41.051 [2024-04-24 16:13:42.225806] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.225816] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.225823] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.265764] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.051 [2024-04-24 16:13:42.265783] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.051 [2024-04-24 16:13:42.265791] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.265798] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f440) on tqpair=0x5bfd00 00:17:41.051 [2024-04-24 16:13:42.265814] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:41.051 [2024-04-24 16:13:42.265832] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:41.051 [2024-04-24 16:13:42.265854] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:41.051 [2024-04-24 16:13:42.265872] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.265880] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5bfd00) 00:17:41.051 [2024-04-24 16:13:42.265891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.051 [2024-04-24 16:13:42.265914] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f440, cid 4, qid 0 00:17:41.051 [2024-04-24 16:13:42.266105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.051 [2024-04-24 16:13:42.266125] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.051 [2024-04-24 16:13:42.266136] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.266147] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5bfd00): datao=0, datal=4096, cccid=4 00:17:41.051 [2024-04-24 16:13:42.266159] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x61f440) on tqpair(0x5bfd00): expected_datao=0, payload_size=4096 00:17:41.051 [2024-04-24 16:13:42.266171] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.266190] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.266199] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.306952] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.051 [2024-04-24 16:13:42.306972] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.051 [2024-04-24 16:13:42.306979] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.306986] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f440) on tqpair=0x5bfd00 00:17:41.051 [2024-04-24 16:13:42.307008] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:41.051 [2024-04-24 16:13:42.307039] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:41.051 [2024-04-24 16:13:42.307056] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.307064] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5bfd00) 00:17:41.051 [2024-04-24 16:13:42.307076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.051 [2024-04-24 16:13:42.307100] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f440, cid 4, qid 0 00:17:41.051 [2024-04-24 16:13:42.307244] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.051 [2024-04-24 16:13:42.307264] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.051 [2024-04-24 16:13:42.307276] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.307286] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5bfd00): datao=0, datal=4096, cccid=4 00:17:41.051 [2024-04-24 16:13:42.307298] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x61f440) on tqpair(0x5bfd00): expected_datao=0, payload_size=4096 00:17:41.051 [2024-04-24 16:13:42.307309] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.307329] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:41.051 [2024-04-24 16:13:42.307338] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:41.314 [2024-04-24 16:13:42.347972] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.314 [2024-04-24 16:13:42.347993] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.314 [2024-04-24 16:13:42.348002] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.314 [2024-04-24 16:13:42.348009] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f440) on tqpair=0x5bfd00 00:17:41.314 [2024-04-24 16:13:42.348041] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:41.314 [2024-04-24 16:13:42.348060] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:41.314 [2024-04-24 16:13:42.348079] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:41.314 [2024-04-24 16:13:42.348089] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:41.314 [2024-04-24 16:13:42.348098] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:41.314 [2024-04-24 16:13:42.348107] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:41.314 [2024-04-24 16:13:42.348115] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:41.314 [2024-04-24 16:13:42.348123] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:41.314 [2024-04-24 16:13:42.348142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.314 [2024-04-24 16:13:42.348151] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5bfd00) 00:17:41.314 [2024-04-24 16:13:42.348163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.314 [2024-04-24 16:13:42.348175] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.314 [2024-04-24 16:13:42.348182] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.314 [2024-04-24 16:13:42.348188] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5bfd00) 00:17:41.314 [2024-04-24 16:13:42.348198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.314 [2024-04-24 16:13:42.348225] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f440, cid 4, qid 0 00:17:41.315 [2024-04-24 16:13:42.348237] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f5a0, cid 5, qid 0 00:17:41.315 [2024-04-24 16:13:42.348398] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.315 [2024-04-24 16:13:42.348414] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.315 [2024-04-24 16:13:42.348421] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.348428] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f440) on tqpair=0x5bfd00 00:17:41.315 [2024-04-24 16:13:42.348439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.315 [2024-04-24 16:13:42.348448] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.315 [2024-04-24 16:13:42.348455] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.348462] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f5a0) on tqpair=0x5bfd00 00:17:41.315 [2024-04-24 16:13:42.348479] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.348490] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5bfd00) 00:17:41.315 [2024-04-24 16:13:42.348501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.315 [2024-04-24 16:13:42.348538] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f5a0, cid 5, qid 0 00:17:41.315 [2024-04-24 16:13:42.348748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.315 [2024-04-24 16:13:42.348765] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.315 [2024-04-24 16:13:42.348773] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.348780] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f5a0) on tqpair=0x5bfd00 00:17:41.315 [2024-04-24 16:13:42.348803] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.348814] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5bfd00) 00:17:41.315 [2024-04-24 16:13:42.348825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.315 [2024-04-24 16:13:42.348847] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f5a0, cid 5, qid 0 00:17:41.315 [2024-04-24 16:13:42.348980] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.315 [2024-04-24 16:13:42.348997] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.315 [2024-04-24 16:13:42.349004] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.349011] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f5a0) on tqpair=0x5bfd00 00:17:41.315 [2024-04-24 16:13:42.349029] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.349041] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5bfd00) 00:17:41.315 [2024-04-24 16:13:42.349052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.315 [2024-04-24 16:13:42.349073] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f5a0, cid 5, qid 0 00:17:41.315 [2024-04-24 16:13:42.349225] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.315 [2024-04-24 16:13:42.349242] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.315 [2024-04-24 16:13:42.349249] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.349259] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f5a0) on tqpair=0x5bfd00 00:17:41.315 [2024-04-24 16:13:42.349280] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.349291] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5bfd00) 00:17:41.315 [2024-04-24 16:13:42.349303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.315 [2024-04-24 16:13:42.349319] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.349327] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5bfd00) 00:17:41.315 [2024-04-24 16:13:42.349337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.315 [2024-04-24 16:13:42.349364] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.349371] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5bfd00) 00:17:41.315 [2024-04-24 16:13:42.349381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.315 [2024-04-24 16:13:42.349392] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.349400] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5bfd00) 00:17:41.315 [2024-04-24 16:13:42.349424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.315 [2024-04-24 16:13:42.349445] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f5a0, cid 5, qid 0 00:17:41.315 [2024-04-24 16:13:42.349456] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f440, cid 4, qid 0 00:17:41.315 [2024-04-24 16:13:42.349464] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f700, cid 6, qid 0 00:17:41.315 [2024-04-24 16:13:42.349486] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f860, cid 7, qid 0 00:17:41.315 [2024-04-24 16:13:42.353772] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.315 [2024-04-24 16:13:42.353790] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.315 [2024-04-24 16:13:42.353798] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353805] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5bfd00): datao=0, datal=8192, cccid=5 00:17:41.315 [2024-04-24 16:13:42.353813] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x61f5a0) on tqpair(0x5bfd00): expected_datao=0, payload_size=8192 00:17:41.315 [2024-04-24 16:13:42.353820] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353831] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353839] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353849] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.315 [2024-04-24 16:13:42.353858] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.315 [2024-04-24 16:13:42.353864] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353871] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5bfd00): datao=0, datal=512, cccid=4 00:17:41.315 [2024-04-24 16:13:42.353879] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x61f440) on tqpair(0x5bfd00): expected_datao=0, payload_size=512 00:17:41.315 [2024-04-24 16:13:42.353886] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353896] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353904] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353912] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.315 [2024-04-24 16:13:42.353921] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.315 [2024-04-24 16:13:42.353928] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353934] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5bfd00): datao=0, datal=512, cccid=6 00:17:41.315 [2024-04-24 16:13:42.353942] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x61f700) on tqpair(0x5bfd00): expected_datao=0, payload_size=512 00:17:41.315 [2024-04-24 16:13:42.353950] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353959] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353967] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353976] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:41.315 [2024-04-24 16:13:42.353984] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:41.315 [2024-04-24 16:13:42.353991] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.353997] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5bfd00): datao=0, datal=4096, cccid=7 00:17:41.315 [2024-04-24 16:13:42.354005] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x61f860) on tqpair(0x5bfd00): expected_datao=0, payload_size=4096 00:17:41.315 [2024-04-24 16:13:42.354012] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.354022] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.354030] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.354039] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.315 [2024-04-24 16:13:42.354047] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.315 [2024-04-24 16:13:42.354054] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.354072] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f5a0) on tqpair=0x5bfd00 00:17:41.315 [2024-04-24 16:13:42.354108] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.315 [2024-04-24 16:13:42.354119] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.315 [2024-04-24 16:13:42.354125] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.354134] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f440) on tqpair=0x5bfd00 00:17:41.315 [2024-04-24 16:13:42.354148] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.315 [2024-04-24 16:13:42.354158] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.315 [2024-04-24 16:13:42.354164] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.354171] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f700) on tqpair=0x5bfd00 00:17:41.315 [2024-04-24 16:13:42.354181] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.315 [2024-04-24 16:13:42.354190] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.315 [2024-04-24 16:13:42.354196] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.315 [2024-04-24 16:13:42.354202] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f860) on tqpair=0x5bfd00 00:17:41.315 ===================================================== 00:17:41.315 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.315 ===================================================== 00:17:41.315 Controller Capabilities/Features 00:17:41.315 ================================ 00:17:41.315 Vendor ID: 8086 00:17:41.315 Subsystem Vendor ID: 8086 00:17:41.315 Serial Number: SPDK00000000000001 00:17:41.315 Model Number: SPDK bdev Controller 00:17:41.315 Firmware Version: 24.05 00:17:41.315 Recommended Arb Burst: 6 00:17:41.315 IEEE OUI Identifier: e4 d2 5c 00:17:41.315 Multi-path I/O 00:17:41.316 May have multiple subsystem ports: Yes 00:17:41.316 May have multiple controllers: Yes 00:17:41.316 Associated with SR-IOV VF: No 00:17:41.316 Max Data Transfer Size: 131072 00:17:41.316 Max Number of Namespaces: 32 00:17:41.316 Max Number of I/O Queues: 127 00:17:41.316 NVMe Specification Version (VS): 1.3 00:17:41.316 NVMe Specification Version (Identify): 1.3 00:17:41.316 Maximum Queue Entries: 128 00:17:41.316 Contiguous Queues Required: Yes 00:17:41.316 Arbitration Mechanisms Supported 00:17:41.316 Weighted Round Robin: Not Supported 00:17:41.316 Vendor Specific: Not Supported 00:17:41.316 Reset Timeout: 15000 ms 00:17:41.316 Doorbell Stride: 4 bytes 00:17:41.316 
NVM Subsystem Reset: Not Supported 00:17:41.316 Command Sets Supported 00:17:41.316 NVM Command Set: Supported 00:17:41.316 Boot Partition: Not Supported 00:17:41.316 Memory Page Size Minimum: 4096 bytes 00:17:41.316 Memory Page Size Maximum: 4096 bytes 00:17:41.316 Persistent Memory Region: Not Supported 00:17:41.316 Optional Asynchronous Events Supported 00:17:41.316 Namespace Attribute Notices: Supported 00:17:41.316 Firmware Activation Notices: Not Supported 00:17:41.316 ANA Change Notices: Not Supported 00:17:41.316 PLE Aggregate Log Change Notices: Not Supported 00:17:41.316 LBA Status Info Alert Notices: Not Supported 00:17:41.316 EGE Aggregate Log Change Notices: Not Supported 00:17:41.316 Normal NVM Subsystem Shutdown event: Not Supported 00:17:41.316 Zone Descriptor Change Notices: Not Supported 00:17:41.316 Discovery Log Change Notices: Not Supported 00:17:41.316 Controller Attributes 00:17:41.316 128-bit Host Identifier: Supported 00:17:41.316 Non-Operational Permissive Mode: Not Supported 00:17:41.316 NVM Sets: Not Supported 00:17:41.316 Read Recovery Levels: Not Supported 00:17:41.316 Endurance Groups: Not Supported 00:17:41.316 Predictable Latency Mode: Not Supported 00:17:41.316 Traffic Based Keep Alive: Not Supported 00:17:41.316 Namespace Granularity: Not Supported 00:17:41.316 SQ Associations: Not Supported 00:17:41.316 UUID List: Not Supported 00:17:41.316 Multi-Domain Subsystem: Not Supported 00:17:41.316 Fixed Capacity Management: Not Supported 00:17:41.316 Variable Capacity Management: Not Supported 00:17:41.316 Delete Endurance Group: Not Supported 00:17:41.316 Delete NVM Set: Not Supported 00:17:41.316 Extended LBA Formats Supported: Not Supported 00:17:41.316 Flexible Data Placement Supported: Not Supported 00:17:41.316 00:17:41.316 Controller Memory Buffer Support 00:17:41.316 ================================ 00:17:41.316 Supported: No 00:17:41.316 00:17:41.316 Persistent Memory Region Support 00:17:41.316 ================================ 00:17:41.316 Supported: No 00:17:41.316 00:17:41.316 Admin Command Set Attributes 00:17:41.316 ============================ 00:17:41.316 Security Send/Receive: Not Supported 00:17:41.316 Format NVM: Not Supported 00:17:41.316 Firmware Activate/Download: Not Supported 00:17:41.316 Namespace Management: Not Supported 00:17:41.316 Device Self-Test: Not Supported 00:17:41.316 Directives: Not Supported 00:17:41.316 NVMe-MI: Not Supported 00:17:41.316 Virtualization Management: Not Supported 00:17:41.316 Doorbell Buffer Config: Not Supported 00:17:41.316 Get LBA Status Capability: Not Supported 00:17:41.316 Command & Feature Lockdown Capability: Not Supported 00:17:41.316 Abort Command Limit: 4 00:17:41.316 Async Event Request Limit: 4 00:17:41.316 Number of Firmware Slots: N/A 00:17:41.316 Firmware Slot 1 Read-Only: N/A 00:17:41.316 Firmware Activation Without Reset: N/A 00:17:41.316 Multiple Update Detection Support: N/A 00:17:41.316 Firmware Update Granularity: No Information Provided 00:17:41.316 Per-Namespace SMART Log: No 00:17:41.316 Asymmetric Namespace Access Log Page: Not Supported 00:17:41.316 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:41.316 Command Effects Log Page: Supported 00:17:41.316 Get Log Page Extended Data: Supported 00:17:41.316 Telemetry Log Pages: Not Supported 00:17:41.316 Persistent Event Log Pages: Not Supported 00:17:41.316 Supported Log Pages Log Page: May Support 00:17:41.316 Commands Supported & Effects Log Page: Not Supported 00:17:41.316 Feature Identifiers & Effects Log Page: May Support 
00:17:41.316 NVMe-MI Commands & Effects Log Page: May Support 00:17:41.316 Data Area 4 for Telemetry Log: Not Supported 00:17:41.316 Error Log Page Entries Supported: 128 00:17:41.316 Keep Alive: Supported 00:17:41.316 Keep Alive Granularity: 10000 ms 00:17:41.316 00:17:41.316 NVM Command Set Attributes 00:17:41.316 ========================== 00:17:41.316 Submission Queue Entry Size 00:17:41.316 Max: 64 00:17:41.316 Min: 64 00:17:41.316 Completion Queue Entry Size 00:17:41.316 Max: 16 00:17:41.316 Min: 16 00:17:41.316 Number of Namespaces: 32 00:17:41.316 Compare Command: Supported 00:17:41.316 Write Uncorrectable Command: Not Supported 00:17:41.316 Dataset Management Command: Supported 00:17:41.316 Write Zeroes Command: Supported 00:17:41.316 Set Features Save Field: Not Supported 00:17:41.316 Reservations: Supported 00:17:41.316 Timestamp: Not Supported 00:17:41.316 Copy: Supported 00:17:41.316 Volatile Write Cache: Present 00:17:41.316 Atomic Write Unit (Normal): 1 00:17:41.316 Atomic Write Unit (PFail): 1 00:17:41.316 Atomic Compare & Write Unit: 1 00:17:41.316 Fused Compare & Write: Supported 00:17:41.316 Scatter-Gather List 00:17:41.316 SGL Command Set: Supported 00:17:41.316 SGL Keyed: Supported 00:17:41.316 SGL Bit Bucket Descriptor: Not Supported 00:17:41.316 SGL Metadata Pointer: Not Supported 00:17:41.316 Oversized SGL: Not Supported 00:17:41.316 SGL Metadata Address: Not Supported 00:17:41.316 SGL Offset: Supported 00:17:41.316 Transport SGL Data Block: Not Supported 00:17:41.316 Replay Protected Memory Block: Not Supported 00:17:41.316 00:17:41.316 Firmware Slot Information 00:17:41.316 ========================= 00:17:41.316 Active slot: 1 00:17:41.316 Slot 1 Firmware Revision: 24.05 00:17:41.316 00:17:41.316 00:17:41.316 Commands Supported and Effects 00:17:41.316 ============================== 00:17:41.316 Admin Commands 00:17:41.316 -------------- 00:17:41.316 Get Log Page (02h): Supported 00:17:41.316 Identify (06h): Supported 00:17:41.316 Abort (08h): Supported 00:17:41.316 Set Features (09h): Supported 00:17:41.316 Get Features (0Ah): Supported 00:17:41.316 Asynchronous Event Request (0Ch): Supported 00:17:41.316 Keep Alive (18h): Supported 00:17:41.316 I/O Commands 00:17:41.316 ------------ 00:17:41.316 Flush (00h): Supported LBA-Change 00:17:41.316 Write (01h): Supported LBA-Change 00:17:41.316 Read (02h): Supported 00:17:41.316 Compare (05h): Supported 00:17:41.316 Write Zeroes (08h): Supported LBA-Change 00:17:41.316 Dataset Management (09h): Supported LBA-Change 00:17:41.316 Copy (19h): Supported LBA-Change 00:17:41.316 Unknown (79h): Supported LBA-Change 00:17:41.316 Unknown (7Ah): Supported 00:17:41.316 00:17:41.316 Error Log 00:17:41.316 ========= 00:17:41.316 00:17:41.316 Arbitration 00:17:41.316 =========== 00:17:41.316 Arbitration Burst: 1 00:17:41.316 00:17:41.316 Power Management 00:17:41.316 ================ 00:17:41.316 Number of Power States: 1 00:17:41.316 Current Power State: Power State #0 00:17:41.316 Power State #0: 00:17:41.316 Max Power: 0.00 W 00:17:41.316 Non-Operational State: Operational 00:17:41.316 Entry Latency: Not Reported 00:17:41.316 Exit Latency: Not Reported 00:17:41.316 Relative Read Throughput: 0 00:17:41.316 Relative Read Latency: 0 00:17:41.316 Relative Write Throughput: 0 00:17:41.316 Relative Write Latency: 0 00:17:41.316 Idle Power: Not Reported 00:17:41.316 Active Power: Not Reported 00:17:41.316 Non-Operational Permissive Mode: Not Supported 00:17:41.316 00:17:41.316 Health Information 00:17:41.316 ================== 
00:17:41.316 Critical Warnings: 00:17:41.316 Available Spare Space: OK 00:17:41.316 Temperature: OK 00:17:41.316 Device Reliability: OK 00:17:41.316 Read Only: No 00:17:41.316 Volatile Memory Backup: OK 00:17:41.316 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:41.316 Temperature Threshold: [2024-04-24 16:13:42.354316] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.316 [2024-04-24 16:13:42.354328] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5bfd00) 00:17:41.316 [2024-04-24 16:13:42.354339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.316 [2024-04-24 16:13:42.354361] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f860, cid 7, qid 0 00:17:41.316 [2024-04-24 16:13:42.354558] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.316 [2024-04-24 16:13:42.354574] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.316 [2024-04-24 16:13:42.354581] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.354588] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f860) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.354630] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:41.317 [2024-04-24 16:13:42.354655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.317 [2024-04-24 16:13:42.354668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.317 [2024-04-24 16:13:42.354693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.317 [2024-04-24 16:13:42.354705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.317 [2024-04-24 16:13:42.354718] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.354725] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.354731] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 [2024-04-24 16:13:42.354767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.354790] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.354958] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.354975] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.354982] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.354989] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.355001] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355008] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355015] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 [2024-04-24 16:13:42.355026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.355059] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.355193] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.355209] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.355217] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355224] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.355232] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:41.317 [2024-04-24 16:13:42.355239] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:41.317 [2024-04-24 16:13:42.355257] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355268] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355274] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 [2024-04-24 16:13:42.355285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.355306] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.355440] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.355456] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.355463] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355470] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.355489] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355500] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355507] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 [2024-04-24 16:13:42.355518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.355539] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.355668] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.355684] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.355691] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355698] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.355716] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355727] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355734] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 [2024-04-24 16:13:42.355751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.355776] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.355906] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.355922] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.355929] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355936] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.355955] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355966] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.355972] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 [2024-04-24 16:13:42.355987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.356009] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.356138] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.356154] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.356161] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.356168] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.356187] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.356198] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.356204] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 [2024-04-24 16:13:42.356215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.356237] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.356364] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.356380] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.356387] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.356394] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.356413] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.356424] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.356430] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 
[2024-04-24 16:13:42.356441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.356462] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.356589] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.356604] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.356612] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.356619] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.356637] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.356648] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.356654] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 [2024-04-24 16:13:42.356665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.356687] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.360757] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.360774] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.360781] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.360803] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.360822] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.360834] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.360840] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5bfd00) 00:17:41.317 [2024-04-24 16:13:42.360855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.317 [2024-04-24 16:13:42.360879] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61f2e0, cid 3, qid 0 00:17:41.317 [2024-04-24 16:13:42.361043] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:41.317 [2024-04-24 16:13:42.361059] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:41.317 [2024-04-24 16:13:42.361067] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:41.317 [2024-04-24 16:13:42.361073] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x61f2e0) on tqpair=0x5bfd00 00:17:41.317 [2024-04-24 16:13:42.361088] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:17:41.317 0 Kelvin (-273 Celsius) 00:17:41.317 Available Spare: 0% 00:17:41.317 Available Spare Threshold: 0% 00:17:41.317 Life Percentage Used: 0% 00:17:41.317 Data Units Read: 0 00:17:41.317 Data Units Written: 0 00:17:41.317 Host Read Commands: 0 00:17:41.317 Host Write Commands: 0 00:17:41.318 Controller Busy Time: 0 minutes 00:17:41.318 Power Cycles: 0 00:17:41.318 Power On Hours: 0 hours 
00:17:41.318 Unsafe Shutdowns: 0 00:17:41.318 Unrecoverable Media Errors: 0 00:17:41.318 Lifetime Error Log Entries: 0 00:17:41.318 Warning Temperature Time: 0 minutes 00:17:41.318 Critical Temperature Time: 0 minutes 00:17:41.318 00:17:41.318 Number of Queues 00:17:41.318 ================ 00:17:41.318 Number of I/O Submission Queues: 127 00:17:41.318 Number of I/O Completion Queues: 127 00:17:41.318 00:17:41.318 Active Namespaces 00:17:41.318 ================= 00:17:41.318 Namespace ID:1 00:17:41.318 Error Recovery Timeout: Unlimited 00:17:41.318 Command Set Identifier: NVM (00h) 00:17:41.318 Deallocate: Supported 00:17:41.318 Deallocated/Unwritten Error: Not Supported 00:17:41.318 Deallocated Read Value: Unknown 00:17:41.318 Deallocate in Write Zeroes: Not Supported 00:17:41.318 Deallocated Guard Field: 0xFFFF 00:17:41.318 Flush: Supported 00:17:41.318 Reservation: Supported 00:17:41.318 Namespace Sharing Capabilities: Multiple Controllers 00:17:41.318 Size (in LBAs): 131072 (0GiB) 00:17:41.318 Capacity (in LBAs): 131072 (0GiB) 00:17:41.318 Utilization (in LBAs): 131072 (0GiB) 00:17:41.318 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:41.318 EUI64: ABCDEF0123456789 00:17:41.318 UUID: fa63edf9-2918-4487-b663-94275136b105 00:17:41.318 Thin Provisioning: Not Supported 00:17:41.318 Per-NS Atomic Units: Yes 00:17:41.318 Atomic Boundary Size (Normal): 0 00:17:41.318 Atomic Boundary Size (PFail): 0 00:17:41.318 Atomic Boundary Offset: 0 00:17:41.318 Maximum Single Source Range Length: 65535 00:17:41.318 Maximum Copy Length: 65535 00:17:41.318 Maximum Source Range Count: 1 00:17:41.318 NGUID/EUI64 Never Reused: No 00:17:41.318 Namespace Write Protected: No 00:17:41.318 Number of LBA Formats: 1 00:17:41.318 Current LBA Format: LBA Format #00 00:17:41.318 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:41.318 00:17:41.318 16:13:42 -- host/identify.sh@51 -- # sync 00:17:41.318 16:13:42 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.318 16:13:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.318 16:13:42 -- common/autotest_common.sh@10 -- # set +x 00:17:41.318 16:13:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.318 16:13:42 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:41.318 16:13:42 -- host/identify.sh@56 -- # nvmftestfini 00:17:41.318 16:13:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:41.318 16:13:42 -- nvmf/common.sh@117 -- # sync 00:17:41.318 16:13:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.318 16:13:42 -- nvmf/common.sh@120 -- # set +e 00:17:41.318 16:13:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.318 16:13:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.318 rmmod nvme_tcp 00:17:41.318 rmmod nvme_fabrics 00:17:41.318 rmmod nvme_keyring 00:17:41.318 16:13:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.318 16:13:42 -- nvmf/common.sh@124 -- # set -e 00:17:41.318 16:13:42 -- nvmf/common.sh@125 -- # return 0 00:17:41.318 16:13:42 -- nvmf/common.sh@478 -- # '[' -n 3434825 ']' 00:17:41.318 16:13:42 -- nvmf/common.sh@479 -- # killprocess 3434825 00:17:41.318 16:13:42 -- common/autotest_common.sh@936 -- # '[' -z 3434825 ']' 00:17:41.318 16:13:42 -- common/autotest_common.sh@940 -- # kill -0 3434825 00:17:41.318 16:13:42 -- common/autotest_common.sh@941 -- # uname 00:17:41.318 16:13:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.318 16:13:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3434825 
00:17:41.318 16:13:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:41.318 16:13:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:41.318 16:13:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3434825' 00:17:41.318 killing process with pid 3434825 00:17:41.318 16:13:42 -- common/autotest_common.sh@955 -- # kill 3434825 00:17:41.318 [2024-04-24 16:13:42.444395] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:41.318 16:13:42 -- common/autotest_common.sh@960 -- # wait 3434825 00:17:41.576 16:13:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:41.576 16:13:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:41.576 16:13:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:41.576 16:13:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.576 16:13:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.577 16:13:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.577 16:13:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.577 16:13:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.114 16:13:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.114 00:17:44.115 real 0m5.402s 00:17:44.115 user 0m4.621s 00:17:44.115 sys 0m1.821s 00:17:44.115 16:13:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:44.115 16:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:44.115 ************************************ 00:17:44.115 END TEST nvmf_identify 00:17:44.115 ************************************ 00:17:44.115 16:13:44 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:44.115 16:13:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:44.115 16:13:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:44.115 16:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:44.115 ************************************ 00:17:44.115 START TEST nvmf_perf 00:17:44.115 ************************************ 00:17:44.115 16:13:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:44.115 * Looking for test storage... 
00:17:44.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:44.115 16:13:44 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.115 16:13:44 -- nvmf/common.sh@7 -- # uname -s 00:17:44.115 16:13:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.115 16:13:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.115 16:13:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.115 16:13:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.115 16:13:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.115 16:13:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.115 16:13:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.115 16:13:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.115 16:13:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.115 16:13:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.115 16:13:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:44.115 16:13:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:44.115 16:13:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.115 16:13:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.115 16:13:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.115 16:13:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.115 16:13:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.115 16:13:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.115 16:13:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.115 16:13:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.115 16:13:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.115 16:13:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.115 16:13:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.115 16:13:44 -- paths/export.sh@5 -- # export PATH 00:17:44.115 16:13:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.115 16:13:44 -- nvmf/common.sh@47 -- # : 0 00:17:44.115 16:13:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.115 16:13:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.115 16:13:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.115 16:13:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.115 16:13:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.115 16:13:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.115 16:13:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.115 16:13:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.115 16:13:44 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:44.115 16:13:44 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:44.115 16:13:44 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.115 16:13:44 -- host/perf.sh@17 -- # nvmftestinit 00:17:44.115 16:13:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:44.115 16:13:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.115 16:13:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:44.115 16:13:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:44.115 16:13:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:44.115 16:13:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.115 16:13:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.115 16:13:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.115 16:13:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:44.115 16:13:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:44.115 16:13:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.115 16:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:46.018 16:13:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:46.018 16:13:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.018 16:13:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.018 16:13:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.018 16:13:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.018 16:13:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.018 16:13:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.018 16:13:46 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:46.018 16:13:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.018 16:13:46 -- nvmf/common.sh@296 -- # e810=() 00:17:46.018 16:13:46 -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.018 16:13:46 -- nvmf/common.sh@297 -- # x722=() 00:17:46.018 16:13:46 -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.018 16:13:46 -- nvmf/common.sh@298 -- # mlx=() 00:17:46.018 16:13:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.018 16:13:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.018 16:13:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.018 16:13:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.018 16:13:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.018 16:13:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.018 16:13:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:46.018 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:46.018 16:13:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.018 16:13:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:46.018 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:46.018 16:13:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.018 16:13:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.018 16:13:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.018 16:13:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:46.018 16:13:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:46.018 16:13:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:46.018 Found net devices under 0000:09:00.0: cvl_0_0 00:17:46.018 16:13:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.018 16:13:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.018 16:13:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.018 16:13:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:46.018 16:13:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.018 16:13:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:46.018 Found net devices under 0000:09:00.1: cvl_0_1 00:17:46.018 16:13:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.018 16:13:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:46.018 16:13:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:46.018 16:13:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:46.018 16:13:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.018 16:13:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.018 16:13:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.018 16:13:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.018 16:13:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.018 16:13:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.018 16:13:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.018 16:13:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.018 16:13:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.018 16:13:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.018 16:13:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.018 16:13:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.018 16:13:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.018 16:13:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.018 16:13:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.018 16:13:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.018 16:13:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.018 16:13:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.018 16:13:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.018 16:13:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:17:46.018 00:17:46.018 --- 10.0.0.2 ping statistics --- 00:17:46.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.018 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:17:46.018 16:13:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:17:46.018 00:17:46.018 --- 10.0.0.1 ping statistics --- 00:17:46.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.018 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:17:46.018 16:13:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.018 16:13:46 -- nvmf/common.sh@411 -- # return 0 00:17:46.018 16:13:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:46.018 16:13:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.018 16:13:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:46.018 16:13:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.018 16:13:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:46.018 16:13:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:46.018 16:13:46 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:46.018 16:13:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:46.018 16:13:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:46.018 16:13:47 -- common/autotest_common.sh@10 -- # set +x 00:17:46.018 16:13:47 -- nvmf/common.sh@470 -- # nvmfpid=3436909 00:17:46.019 16:13:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:46.019 16:13:47 -- nvmf/common.sh@471 -- # waitforlisten 3436909 00:17:46.019 16:13:47 -- common/autotest_common.sh@817 -- # '[' -z 3436909 ']' 00:17:46.019 16:13:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.019 16:13:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:46.019 16:13:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.019 16:13:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:46.019 16:13:47 -- common/autotest_common.sh@10 -- # set +x 00:17:46.019 [2024-04-24 16:13:47.049913] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:17:46.019 [2024-04-24 16:13:47.050008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.019 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.019 [2024-04-24 16:13:47.120559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.019 [2024-04-24 16:13:47.229864] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.019 [2024-04-24 16:13:47.229915] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.019 [2024-04-24 16:13:47.229930] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.019 [2024-04-24 16:13:47.229943] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.019 [2024-04-24 16:13:47.229954] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
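(For anyone replaying this by hand: the nvmftestinit/nvmfappstart steps interleaved above reduce to the shell sketch below. This is a condensation of commands visible in this run only; the interface names cvl_0_0/cvl_0_1 belong to this rig's E810 ports and the workspace path will differ elsewhere.)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target-side port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic through
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # start the target on cores 0-3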
00:17:46.019 [2024-04-24 16:13:47.230013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.019 [2024-04-24 16:13:47.230077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.019 [2024-04-24 16:13:47.230134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.019 [2024-04-24 16:13:47.230137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.294 16:13:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:46.294 16:13:47 -- common/autotest_common.sh@850 -- # return 0 00:17:46.294 16:13:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:46.294 16:13:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:46.294 16:13:47 -- common/autotest_common.sh@10 -- # set +x 00:17:46.294 16:13:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.294 16:13:47 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:46.294 16:13:47 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:49.593 16:13:50 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:49.593 16:13:50 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:49.593 16:13:50 -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:17:49.593 16:13:50 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.852 16:13:50 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:49.852 16:13:50 -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:17:49.852 16:13:50 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:49.852 16:13:50 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:49.852 16:13:50 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:50.109 [2024-04-24 16:13:51.196122] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.109 16:13:51 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.367 16:13:51 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:50.367 16:13:51 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.625 16:13:51 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:50.625 16:13:51 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:50.884 16:13:51 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.143 [2024-04-24 16:13:52.191670] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.143 16:13:52 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:51.402 16:13:52 -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:17:51.402 16:13:52 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:17:51.402 16:13:52 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
00:17:51.402 16:13:52 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:17:52.779 Initializing NVMe Controllers 00:17:52.779 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:17:52.779 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:17:52.779 Initialization complete. Launching workers. 00:17:52.779 ======================================================== 00:17:52.779 Latency(us) 00:17:52.779 Device Information : IOPS MiB/s Average min max 00:17:52.779 PCIE (0000:0b:00.0) NSID 1 from core 0: 85825.27 335.25 372.40 31.56 5704.64 00:17:52.779 ======================================================== 00:17:52.779 Total : 85825.27 335.25 372.40 31.56 5704.64 00:17:52.779 00:17:52.779 16:13:53 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:52.779 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.760 Initializing NVMe Controllers 00:17:53.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:53.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:53.760 Initialization complete. Launching workers. 00:17:53.760 ======================================================== 00:17:53.760 Latency(us) 00:17:53.760 Device Information : IOPS MiB/s Average min max 00:17:53.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 98.00 0.38 10261.61 192.86 46053.06 00:17:53.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19694.67 7952.84 47899.10 00:17:53.760 ======================================================== 00:17:53.760 Total : 149.00 0.58 13490.38 192.86 47899.10 00:17:53.760 00:17:53.760 16:13:54 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:54.048 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.423 Initializing NVMe Controllers 00:17:55.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:55.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:55.423 Initialization complete. Launching workers. 
00:17:55.423 ======================================================== 00:17:55.423 Latency(us) 00:17:55.423 Device Information : IOPS MiB/s Average min max 00:17:55.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8248.85 32.22 3878.87 485.52 8205.21 00:17:55.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3764.78 14.71 8525.20 6021.60 16801.89 00:17:55.423 ======================================================== 00:17:55.423 Total : 12013.64 46.93 5334.92 485.52 16801.89 00:17:55.423 00:17:55.423 16:13:56 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:17:55.423 16:13:56 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:17:55.423 16:13:56 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:55.423 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.954 Initializing NVMe Controllers 00:17:57.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.954 Controller IO queue size 128, less than required. 00:17:57.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:57.954 Controller IO queue size 128, less than required. 00:17:57.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:57.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:57.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:57.954 Initialization complete. Launching workers. 00:17:57.954 ======================================================== 00:17:57.954 Latency(us) 00:17:57.955 Device Information : IOPS MiB/s Average min max 00:17:57.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 892.77 223.19 146529.84 75774.86 241707.29 00:17:57.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 321.42 80.35 413907.91 144079.00 674371.81 00:17:57.955 ======================================================== 00:17:57.955 Total : 1214.19 303.55 217309.63 75774.86 674371.81 00:17:57.955 00:17:57.955 16:13:59 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:57.955 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.955 No valid NVMe controllers or AIO or URING devices found 00:17:57.955 Initializing NVMe Controllers 00:17:57.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.955 Controller IO queue size 128, less than required. 00:17:57.955 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:57.955 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:57.955 Controller IO queue size 128, less than required. 00:17:57.955 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:57.955 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:17:57.955 WARNING: Some requested NVMe devices were skipped 00:17:57.955 16:13:59 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:57.955 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.491 Initializing NVMe Controllers 00:18:00.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:00.491 Controller IO queue size 128, less than required. 00:18:00.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:00.491 Controller IO queue size 128, less than required. 00:18:00.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:00.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:00.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:00.491 Initialization complete. Launching workers. 00:18:00.491 00:18:00.491 ==================== 00:18:00.491 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:00.491 TCP transport: 00:18:00.491 polls: 22212 00:18:00.491 idle_polls: 13387 00:18:00.491 sock_completions: 8825 00:18:00.491 nvme_completions: 4045 00:18:00.491 submitted_requests: 6082 00:18:00.491 queued_requests: 1 00:18:00.491 00:18:00.491 ==================== 00:18:00.491 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:00.491 TCP transport: 00:18:00.491 polls: 25031 00:18:00.491 idle_polls: 10196 00:18:00.491 sock_completions: 14835 00:18:00.491 nvme_completions: 4255 00:18:00.491 submitted_requests: 6302 00:18:00.491 queued_requests: 1 00:18:00.491 ======================================================== 00:18:00.491 Latency(us) 00:18:00.491 Device Information : IOPS MiB/s Average min max 00:18:00.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1010.41 252.60 131361.60 94207.18 194970.75 00:18:00.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1062.88 265.72 123530.13 50689.38 172126.17 00:18:00.491 ======================================================== 00:18:00.491 Total : 2073.29 518.32 127346.77 50689.38 194970.75 00:18:00.491 00:18:00.491 16:14:01 -- host/perf.sh@66 -- # sync 00:18:00.491 16:14:01 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.749 16:14:01 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:00.749 16:14:01 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:00.749 16:14:01 -- host/perf.sh@114 -- # nvmftestfini 00:18:00.749 16:14:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:00.749 16:14:01 -- nvmf/common.sh@117 -- # sync 00:18:00.749 16:14:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:00.749 16:14:01 -- nvmf/common.sh@120 -- # set +e 00:18:00.749 16:14:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:00.749 16:14:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:00.749 rmmod nvme_tcp 00:18:00.749 rmmod nvme_fabrics 00:18:00.749 rmmod nvme_keyring 00:18:00.749 16:14:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:00.749 16:14:01 -- nvmf/common.sh@124 -- # set -e 00:18:00.749 16:14:01 -- nvmf/common.sh@125 -- # return 0 00:18:00.749 16:14:01 -- 
nvmf/common.sh@478 -- # '[' -n 3436909 ']' 00:18:00.749 16:14:01 -- nvmf/common.sh@479 -- # killprocess 3436909 00:18:00.749 16:14:01 -- common/autotest_common.sh@936 -- # '[' -z 3436909 ']' 00:18:00.749 16:14:01 -- common/autotest_common.sh@940 -- # kill -0 3436909 00:18:00.749 16:14:01 -- common/autotest_common.sh@941 -- # uname 00:18:00.749 16:14:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.749 16:14:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3436909 00:18:00.749 16:14:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:00.749 16:14:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:00.749 16:14:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3436909' 00:18:00.749 killing process with pid 3436909 00:18:00.749 16:14:01 -- common/autotest_common.sh@955 -- # kill 3436909 00:18:00.749 16:14:01 -- common/autotest_common.sh@960 -- # wait 3436909 00:18:02.656 16:14:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:02.656 16:14:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:02.656 16:14:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:02.656 16:14:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.657 16:14:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.657 16:14:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.657 16:14:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.657 16:14:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.564 16:14:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:04.564 00:18:04.564 real 0m20.680s 00:18:04.564 user 1m0.972s 00:18:04.564 sys 0m5.052s 00:18:04.564 16:14:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:04.564 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:18:04.564 ************************************ 00:18:04.564 END TEST nvmf_perf 00:18:04.564 ************************************ 00:18:04.564 16:14:05 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:04.564 16:14:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:04.564 16:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:04.564 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:18:04.564 ************************************ 00:18:04.564 START TEST nvmf_fio_host 00:18:04.564 ************************************ 00:18:04.564 16:14:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:04.564 * Looking for test storage... 
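The nvmf_fio_host suite starting here drives I/O through SPDK's userspace fio plugin rather than the kernel NVMe-oF initiator. As a rough sketch of the invocation the script assembles later in this trace (the plugin path, fio binary, job file, and connection string are all taken verbatim from the trace below; nothing else is assumed):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096

With ioengine=spdk in the job file, fio hands the --filename string to the plugin as NVMe-oF connection parameters instead of a block-device path, which is why the "filename" here is a transport/address tuple.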
00:18:04.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:04.564 16:14:05 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.564 16:14:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.564 16:14:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.564 16:14:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.564 16:14:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.564 16:14:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.564 16:14:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.564 16:14:05 -- paths/export.sh@5 -- # export PATH 00:18:04.564 16:14:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.564 16:14:05 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.564 16:14:05 -- nvmf/common.sh@7 -- # uname -s 00:18:04.564 16:14:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.564 16:14:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.564 16:14:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.564 16:14:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.564 16:14:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.564 16:14:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.564 16:14:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.564 16:14:05 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.564 16:14:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.564 16:14:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.564 16:14:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:04.564 16:14:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:04.564 16:14:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.564 16:14:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.564 16:14:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.564 16:14:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.564 16:14:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.564 16:14:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.564 16:14:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.564 16:14:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.564 16:14:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.564 16:14:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.564 16:14:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.564 16:14:05 -- paths/export.sh@5 -- # export PATH 00:18:04.564 16:14:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.564 16:14:05 -- nvmf/common.sh@47 -- # : 0 00:18:04.564 16:14:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:04.564 16:14:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:04.564 16:14:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.564 16:14:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.564 16:14:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.564 16:14:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:04.564 16:14:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:04.564 16:14:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:04.564 16:14:05 -- host/fio.sh@12 -- # nvmftestinit 00:18:04.564 16:14:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:04.564 16:14:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.564 16:14:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:04.564 16:14:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:04.564 16:14:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:04.564 16:14:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.564 16:14:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.564 16:14:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.564 16:14:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:04.564 16:14:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:04.564 16:14:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:04.564 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:18:07.095 16:14:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:07.095 16:14:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:07.095 16:14:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:07.095 16:14:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:07.095 16:14:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:07.095 16:14:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:07.095 16:14:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:07.095 16:14:07 -- nvmf/common.sh@295 -- # net_devs=() 00:18:07.095 16:14:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:07.095 16:14:07 -- nvmf/common.sh@296 -- # e810=() 00:18:07.095 16:14:07 -- nvmf/common.sh@296 -- # local -ga e810 00:18:07.095 16:14:07 -- nvmf/common.sh@297 -- # x722=() 00:18:07.095 16:14:07 -- nvmf/common.sh@297 -- # local -ga x722 00:18:07.095 16:14:07 -- nvmf/common.sh@298 -- # mlx=() 00:18:07.095 16:14:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:07.095 16:14:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.095 16:14:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:07.095 16:14:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:07.095 16:14:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:07.095 16:14:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.095 16:14:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:07.095 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:07.095 16:14:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.095 16:14:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:07.095 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:07.095 16:14:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:07.095 16:14:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.095 16:14:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.095 16:14:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:07.095 16:14:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.095 16:14:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:07.095 Found net devices under 0000:09:00.0: cvl_0_0 00:18:07.095 16:14:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.095 16:14:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.095 16:14:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.095 16:14:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:07.095 16:14:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.095 16:14:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:07.095 Found net devices under 0000:09:00.1: cvl_0_1 00:18:07.095 16:14:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.095 16:14:07 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:07.095 16:14:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:07.095 16:14:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:07.095 16:14:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.095 16:14:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.095 16:14:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.095 16:14:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:07.095 16:14:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.095 16:14:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.095 16:14:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:07.095 16:14:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.095 16:14:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.095 16:14:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:07.095 16:14:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:07.095 16:14:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.095 16:14:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.095 16:14:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.095 16:14:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.095 16:14:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:07.095 16:14:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.095 16:14:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.095 16:14:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.095 16:14:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:07.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:18:07.095 00:18:07.095 --- 10.0.0.2 ping statistics --- 00:18:07.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.095 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:18:07.095 16:14:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:18:07.095 00:18:07.095 --- 10.0.0.1 ping statistics --- 00:18:07.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.095 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:18:07.095 16:14:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.095 16:14:07 -- nvmf/common.sh@411 -- # return 0 00:18:07.095 16:14:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:07.095 16:14:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.095 16:14:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:07.095 16:14:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.095 16:14:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:07.095 16:14:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:07.095 16:14:07 -- host/fio.sh@14 -- # [[ y != y ]] 00:18:07.095 16:14:07 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:18:07.095 16:14:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:07.095 16:14:07 -- common/autotest_common.sh@10 -- # set +x 00:18:07.095 16:14:07 -- host/fio.sh@22 -- # nvmfpid=3440765 00:18:07.095 16:14:07 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.095 16:14:07 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.095 16:14:07 -- host/fio.sh@26 -- # waitforlisten 3440765 00:18:07.095 16:14:07 -- common/autotest_common.sh@817 -- # '[' -z 3440765 ']' 00:18:07.095 16:14:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.095 16:14:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:07.095 16:14:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.095 16:14:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:07.095 16:14:07 -- common/autotest_common.sh@10 -- # set +x 00:18:07.095 [2024-04-24 16:14:08.040208] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:18:07.095 [2024-04-24 16:14:08.040289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.095 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.095 [2024-04-24 16:14:08.108757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.096 [2024-04-24 16:14:08.218309] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.096 [2024-04-24 16:14:08.218365] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.096 [2024-04-24 16:14:08.218393] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.096 [2024-04-24 16:14:08.218405] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.096 [2024-04-24 16:14:08.218423] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
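Once the reactors below come up, fio.sh configures the freshly started nvmf_tgt entirely over JSON-RPC; the rpc_cmd calls in the trace are, in effect, a wrapper around scripts/rpc.py. Spelled out against rpc.py directly (a sketch assuming the default RPC socket; the commands and arguments are exactly the ones echoed below):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

That is: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks exported as a namespace of cnode1, and data plus discovery listeners on 10.0.0.2:4420 inside the cvl_0_0_ns_spdk network namespace set up above.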
00:18:07.096 [2024-04-24 16:14:08.218486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.096 [2024-04-24 16:14:08.218545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.096 [2024-04-24 16:14:08.218612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.096 [2024-04-24 16:14:08.218615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.031 16:14:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:08.031 16:14:09 -- common/autotest_common.sh@850 -- # return 0 00:18:08.031 16:14:09 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.031 16:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.031 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.031 [2024-04-24 16:14:09.025567] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.031 16:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.031 16:14:09 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:18:08.031 16:14:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:08.031 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.031 16:14:09 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:08.031 16:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.031 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.031 Malloc1 00:18:08.031 16:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.031 16:14:09 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:08.031 16:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.031 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.031 16:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.031 16:14:09 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:08.031 16:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.031 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.031 16:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.031 16:14:09 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.031 16:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.031 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.031 [2024-04-24 16:14:09.107194] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.031 16:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.031 16:14:09 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:08.031 16:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.031 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.031 16:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.031 16:14:09 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:18:08.031 16:14:09 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.031 16:14:09 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.031 16:14:09 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:08.031 16:14:09 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.031 16:14:09 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:08.031 16:14:09 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:08.031 16:14:09 -- common/autotest_common.sh@1327 -- # shift 00:18:08.031 16:14:09 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:08.031 16:14:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.031 16:14:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:08.031 16:14:09 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:08.031 16:14:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:08.031 16:14:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:08.031 16:14:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:08.031 16:14:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.031 16:14:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:08.031 16:14:09 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:18:08.031 16:14:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:08.031 16:14:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:08.031 16:14:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:08.031 16:14:09 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:08.031 16:14:09 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.290 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:08.290 fio-3.35 00:18:08.290 Starting 1 thread 00:18:08.290 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.816 00:18:10.816 test: (groupid=0, jobs=1): err= 0: pid=3441116: Wed Apr 24 16:14:11 2024 00:18:10.816 read: IOPS=8043, BW=31.4MiB/s (32.9MB/s)(63.1MiB/2007msec) 00:18:10.816 slat (nsec): min=1988, max=146908, avg=2662.10, stdev=1927.34 00:18:10.816 clat (usec): min=2652, max=14239, avg=8805.15, stdev=678.46 00:18:10.816 lat (usec): min=2678, max=14242, avg=8807.81, stdev=678.43 00:18:10.816 clat percentiles (usec): 00:18:10.816 | 1.00th=[ 7308], 5.00th=[ 7767], 10.00th=[ 7963], 20.00th=[ 8225], 00:18:10.816 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:18:10.816 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:18:10.816 | 99.00th=[10290], 99.50th=[10421], 99.90th=[12256], 99.95th=[12518], 00:18:10.816 | 99.99th=[14222] 00:18:10.816 bw ( KiB/s): min=30336, max=33144, per=99.95%, avg=32160.00, stdev=1244.83, samples=4 00:18:10.816 iops : min= 7584, max= 8286, avg=8040.00, stdev=311.21, samples=4 00:18:10.816 write: IOPS=8023, BW=31.3MiB/s (32.9MB/s)(62.9MiB/2007msec); 0 zone resets 00:18:10.816 slat (usec): min=2, max=127, avg= 2.81, stdev= 1.54 00:18:10.816 clat (usec): 
min=2141, max=12381, avg=7074.31, stdev=592.48 00:18:10.816 lat (usec): min=2150, max=12383, avg=7077.13, stdev=592.52 00:18:10.816 clat percentiles (usec): 00:18:10.816 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:18:10.816 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:18:10.816 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 7963], 00:18:10.816 | 99.00th=[ 8356], 99.50th=[ 8586], 99.90th=[10159], 99.95th=[11469], 00:18:10.816 | 99.99th=[12387] 00:18:10.816 bw ( KiB/s): min=31424, max=32576, per=99.96%, avg=32080.00, stdev=513.00, samples=4 00:18:10.816 iops : min= 7856, max= 8144, avg=8020.00, stdev=128.25, samples=4 00:18:10.816 lat (msec) : 4=0.11%, 10=98.12%, 20=1.77% 00:18:10.816 cpu : usr=57.78%, sys=36.99%, ctx=85, majf=0, minf=37 00:18:10.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:10.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.816 issued rwts: total=16144,16103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.816 00:18:10.816 Run status group 0 (all jobs): 00:18:10.816 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=63.1MiB (66.1MB), run=2007-2007msec 00:18:10.816 WRITE: bw=31.3MiB/s (32.9MB/s), 31.3MiB/s-31.3MiB/s (32.9MB/s-32.9MB/s), io=62.9MiB (66.0MB), run=2007-2007msec 00:18:10.816 16:14:11 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:10.816 16:14:11 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:10.816 16:14:11 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:10.816 16:14:11 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:10.816 16:14:11 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:10.816 16:14:11 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:10.816 16:14:11 -- common/autotest_common.sh@1327 -- # shift 00:18:10.816 16:14:11 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:10.816 16:14:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:10.816 16:14:11 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:10.816 16:14:11 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:10.816 16:14:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:10.816 16:14:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:10.816 16:14:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:10.816 16:14:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:10.816 16:14:11 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:10.816 16:14:11 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:18:10.816 16:14:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:10.816 16:14:11 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:18:10.816 16:14:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:10.816 16:14:11 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:10.816 16:14:11 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:10.816 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:10.816 fio-3.35 00:18:10.816 Starting 1 thread 00:18:10.816 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.347 00:18:13.347 test: (groupid=0, jobs=1): err= 0: pid=3441449: Wed Apr 24 16:14:14 2024 00:18:13.347 read: IOPS=8244, BW=129MiB/s (135MB/s)(259MiB/2007msec) 00:18:13.347 slat (usec): min=2, max=110, avg= 3.74, stdev= 1.74 00:18:13.347 clat (usec): min=2411, max=17050, avg=9240.29, stdev=2215.67 00:18:13.347 lat (usec): min=2414, max=17053, avg=9244.04, stdev=2215.66 00:18:13.347 clat percentiles (usec): 00:18:13.347 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 6456], 20.00th=[ 7373], 00:18:13.347 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9634], 00:18:13.347 | 70.00th=[10159], 80.00th=[10945], 90.00th=[12125], 95.00th=[13173], 00:18:13.347 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16450], 99.95th=[16450], 00:18:13.347 | 99.99th=[16712] 00:18:13.347 bw ( KiB/s): min=59264, max=72064, per=50.67%, avg=66832.00, stdev=5461.20, samples=4 00:18:13.347 iops : min= 3704, max= 4504, avg=4177.00, stdev=341.32, samples=4 00:18:13.347 write: IOPS=4813, BW=75.2MiB/s (78.9MB/s)(137MiB/1823msec); 0 zone resets 00:18:13.347 slat (usec): min=30, max=191, avg=33.97, stdev= 5.44 00:18:13.347 clat (usec): min=4494, max=20330, avg=11236.18, stdev=2150.37 00:18:13.347 lat (usec): min=4525, max=20364, avg=11270.15, stdev=2150.68 00:18:13.347 clat percentiles (usec): 00:18:13.347 | 1.00th=[ 7046], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9503], 00:18:13.347 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:18:13.347 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14353], 95.00th=[15139], 00:18:13.347 | 99.00th=[16581], 99.50th=[16909], 99.90th=[18482], 99.95th=[18744], 00:18:13.347 | 99.99th=[20317] 00:18:13.347 bw ( KiB/s): min=61696, max=75552, per=90.76%, avg=69896.00, stdev=5977.32, samples=4 00:18:13.347 iops : min= 3856, max= 4722, avg=4368.50, stdev=373.58, samples=4 00:18:13.347 lat (msec) : 4=0.11%, 10=55.18%, 20=44.71%, 50=0.01% 00:18:13.347 cpu : usr=72.65%, sys=23.57%, ctx=46, majf=0, minf=63 00:18:13.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:13.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:13.347 issued rwts: total=16546,8775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:13.347 00:18:13.347 Run status group 0 (all jobs): 00:18:13.347 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2007-2007msec 00:18:13.347 WRITE: bw=75.2MiB/s (78.9MB/s), 75.2MiB/s-75.2MiB/s (78.9MB/s-78.9MB/s), io=137MiB (144MB), run=1823-1823msec 00:18:13.347 16:14:14 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.347 16:14:14 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.347 16:14:14 -- common/autotest_common.sh@10 -- # set +x 00:18:13.347 16:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.347 16:14:14 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:18:13.347 16:14:14 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:18:13.347 16:14:14 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:18:13.347 16:14:14 -- host/fio.sh@84 -- # nvmftestfini 00:18:13.347 16:14:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:13.347 16:14:14 -- nvmf/common.sh@117 -- # sync 00:18:13.347 16:14:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.347 16:14:14 -- nvmf/common.sh@120 -- # set +e 00:18:13.347 16:14:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.347 16:14:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.347 rmmod nvme_tcp 00:18:13.347 rmmod nvme_fabrics 00:18:13.347 rmmod nvme_keyring 00:18:13.347 16:14:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.347 16:14:14 -- nvmf/common.sh@124 -- # set -e 00:18:13.347 16:14:14 -- nvmf/common.sh@125 -- # return 0 00:18:13.347 16:14:14 -- nvmf/common.sh@478 -- # '[' -n 3440765 ']' 00:18:13.347 16:14:14 -- nvmf/common.sh@479 -- # killprocess 3440765 00:18:13.347 16:14:14 -- common/autotest_common.sh@936 -- # '[' -z 3440765 ']' 00:18:13.347 16:14:14 -- common/autotest_common.sh@940 -- # kill -0 3440765 00:18:13.347 16:14:14 -- common/autotest_common.sh@941 -- # uname 00:18:13.347 16:14:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.347 16:14:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3440765 00:18:13.347 16:14:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:13.347 16:14:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:13.347 16:14:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3440765' 00:18:13.347 killing process with pid 3440765 00:18:13.347 16:14:14 -- common/autotest_common.sh@955 -- # kill 3440765 00:18:13.347 16:14:14 -- common/autotest_common.sh@960 -- # wait 3440765 00:18:13.607 16:14:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:13.607 16:14:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:13.607 16:14:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:13.607 16:14:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.607 16:14:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.607 16:14:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.607 16:14:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.607 16:14:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.516 16:14:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.516 00:18:15.516 real 0m10.999s 00:18:15.516 user 0m29.334s 00:18:15.516 sys 0m3.953s 00:18:15.516 16:14:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:15.516 16:14:16 -- common/autotest_common.sh@10 -- # set +x 00:18:15.516 ************************************ 00:18:15.516 END TEST nvmf_fio_host 00:18:15.516 ************************************ 00:18:15.516 16:14:16 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:15.516 16:14:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:15.516 16:14:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.516 16:14:16 -- common/autotest_common.sh@10 -- # 
set +x 00:18:15.775 ************************************ 00:18:15.775 START TEST nvmf_failover 00:18:15.775 ************************************ 00:18:15.775 16:14:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:15.775 * Looking for test storage... 00:18:15.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:15.775 16:14:16 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.775 16:14:16 -- nvmf/common.sh@7 -- # uname -s 00:18:15.775 16:14:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.775 16:14:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.775 16:14:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.775 16:14:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.775 16:14:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.775 16:14:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.775 16:14:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.775 16:14:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.775 16:14:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.775 16:14:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.775 16:14:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:15.775 16:14:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:15.775 16:14:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.775 16:14:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.775 16:14:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.775 16:14:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.775 16:14:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.775 16:14:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.775 16:14:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.775 16:14:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.775 16:14:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.775 16:14:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.775 16:14:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.775 16:14:16 -- paths/export.sh@5 -- # export PATH 00:18:15.775 16:14:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.775 16:14:16 -- nvmf/common.sh@47 -- # : 0 00:18:15.775 16:14:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.775 16:14:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.775 16:14:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.775 16:14:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.775 16:14:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.775 16:14:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.775 16:14:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.775 16:14:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.775 16:14:16 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.775 16:14:16 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.775 16:14:16 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.775 16:14:16 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.775 16:14:16 -- host/failover.sh@18 -- # nvmftestinit 00:18:15.775 16:14:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:15.775 16:14:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.775 16:14:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:15.775 16:14:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:15.775 16:14:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:15.775 16:14:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.775 16:14:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.775 16:14:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.775 16:14:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:15.775 16:14:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:15.775 16:14:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.776 16:14:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.677 16:14:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:17.677 16:14:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:17.677 16:14:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:17.677 16:14:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:17.677 16:14:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:17.677 16:14:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:17.677 16:14:18 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:17.677 16:14:18 -- nvmf/common.sh@295 -- # net_devs=() 00:18:17.677 16:14:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:17.677 16:14:18 -- nvmf/common.sh@296 -- # e810=() 00:18:17.677 16:14:18 -- nvmf/common.sh@296 -- # local -ga e810 00:18:17.677 16:14:18 -- nvmf/common.sh@297 -- # x722=() 00:18:17.677 16:14:18 -- nvmf/common.sh@297 -- # local -ga x722 00:18:17.677 16:14:18 -- nvmf/common.sh@298 -- # mlx=() 00:18:17.677 16:14:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:17.677 16:14:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.677 16:14:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:17.677 16:14:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:17.677 16:14:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:17.677 16:14:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.677 16:14:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:17.677 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:17.677 16:14:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.677 16:14:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:17.677 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:17.677 16:14:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:17.677 16:14:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.677 16:14:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.677 16:14:18 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:18:17.677 16:14:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.677 16:14:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:17.677 Found net devices under 0000:09:00.0: cvl_0_0 00:18:17.677 16:14:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.677 16:14:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.677 16:14:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.677 16:14:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:17.677 16:14:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.677 16:14:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:17.677 Found net devices under 0000:09:00.1: cvl_0_1 00:18:17.677 16:14:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.677 16:14:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:17.677 16:14:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:17.677 16:14:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:17.677 16:14:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:17.677 16:14:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.677 16:14:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.677 16:14:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.678 16:14:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:17.678 16:14:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.678 16:14:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.678 16:14:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:17.678 16:14:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.678 16:14:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.678 16:14:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:17.678 16:14:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:17.678 16:14:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.678 16:14:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.678 16:14:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.678 16:14:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.678 16:14:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.678 16:14:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.938 16:14:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.938 16:14:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.938 16:14:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:18:17.938 00:18:17.938 --- 10.0.0.2 ping statistics --- 00:18:17.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.938 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:18:17.938 16:14:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:17.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:18:17.938 00:18:17.938 --- 10.0.0.1 ping statistics --- 00:18:17.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.938 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:18:17.938 16:14:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.938 16:14:19 -- nvmf/common.sh@411 -- # return 0 00:18:17.938 16:14:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:17.938 16:14:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.938 16:14:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:17.938 16:14:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:17.938 16:14:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.938 16:14:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:17.938 16:14:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:17.938 16:14:19 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:17.938 16:14:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:17.938 16:14:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:17.938 16:14:19 -- common/autotest_common.sh@10 -- # set +x 00:18:17.938 16:14:19 -- nvmf/common.sh@470 -- # nvmfpid=3443649 00:18:17.938 16:14:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:17.938 16:14:19 -- nvmf/common.sh@471 -- # waitforlisten 3443649 00:18:17.938 16:14:19 -- common/autotest_common.sh@817 -- # '[' -z 3443649 ']' 00:18:17.938 16:14:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.938 16:14:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:17.938 16:14:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.938 16:14:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:17.938 16:14:19 -- common/autotest_common.sh@10 -- # set +x 00:18:17.938 [2024-04-24 16:14:19.074962] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:18:17.938 [2024-04-24 16:14:19.075058] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.938 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.938 [2024-04-24 16:14:19.139635] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:18.197 [2024-04-24 16:14:19.245719] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.197 [2024-04-24 16:14:19.245777] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.197 [2024-04-24 16:14:19.245792] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.197 [2024-04-24 16:14:19.245804] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.197 [2024-04-24 16:14:19.245814] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
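The failover host test builds the same kind of target, with one difference visible in the RPC trace that follows: cnode1 listens on three ports (4420, 4421, 4422), and bdevperf attaches the same controller name over two of them, giving the bdev_nvme layer a second path for the failover steps exercised later. Condensed from the commands below (the loop is only shorthand; every argument is verbatim from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf was started with -z -r /var/tmp/bdevperf.sock, so it is driven over its own RPC socket:
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1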
00:18:18.197 [2024-04-24 16:14:19.245907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:18.197 [2024-04-24 16:14:19.245969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:18:18.197 [2024-04-24 16:14:19.245972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:18.197 16:14:19 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:18:18.197 16:14:19 -- common/autotest_common.sh@850 -- # return 0
00:18:18.197 16:14:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:18:18.197 16:14:19 -- common/autotest_common.sh@716 -- # xtrace_disable
00:18:18.197 16:14:19 -- common/autotest_common.sh@10 -- # set +x
00:18:18.197 16:14:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:18.197 16:14:19 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:18:18.455 [2024-04-24 16:14:19.641503] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:18.455 16:14:19 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:18:18.712 Malloc0
00:18:18.712 16:14:19 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:18.971 16:14:20 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:19.228 16:14:20 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:19.486 [2024-04-24 16:14:20.646681] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:19.486 16:14:20 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:18:19.744 [2024-04-24 16:14:20.891375] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:18:19.744 16:14:20 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:18:20.002 [2024-04-24 16:14:21.132195] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
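The target-side bring-up above, condensed into one sketch; rpc.py here is shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the ports are the ones used by this run:

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, -o/-u options as traced above
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                    # three listeners to fail over between
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done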
00:18:20.002 16:14:21 -- host/failover.sh@31 -- # bdevperf_pid=3443940
00:18:20.002 16:14:21 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:18:20.002 16:14:21 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:20.002 16:14:21 -- host/failover.sh@34 -- # waitforlisten 3443940 /var/tmp/bdevperf.sock
00:18:20.002 16:14:21 -- common/autotest_common.sh@817 -- # '[' -z 3443940 ']'
00:18:20.002 16:14:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:20.002 16:14:21 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:20.002 16:14:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:20.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:20.002 16:14:21 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:20.002 16:14:21 -- common/autotest_common.sh@10 -- # set +x
00:18:20.261 16:14:21 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:18:20.261 16:14:21 -- common/autotest_common.sh@850 -- # return 0
00:18:20.261 16:14:21 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:20.519 NVMe0n1
00:18:20.519 16:14:21 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:21.089
00:18:21.089 16:14:22 -- host/failover.sh@39 -- # run_test_pid=3444072
00:18:21.089 16:14:22 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:21.089 16:14:22 -- host/failover.sh@41 -- # sleep 1
00:18:22.026 16:14:23 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:22.285 [2024-04-24 16:14:23.321959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfa370 is same with the state(5) to be set
[... the same tcp.c:1587 *ERROR* line for tqpair=0xbfa370 repeats for roughly three dozen further log entries, identical apart from timestamps (16:14:23.322041 through 16:14:23.322505); elided ...]
00:18:22.285 16:14:23 -- host/failover.sh@45 -- # sleep 3
00:18:25.595 16:14:26 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:25.596
00:18:25.596 16:14:26 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:18:25.855 [2024-04-24 16:14:27.017586] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfab80 is same with the state(5) to be set
[... the same tcp.c:1587 *ERROR* line for tqpair=0xbfab80 repeats for several dozen further log entries, identical apart from timestamps (16:14:27.017667 through 16:14:27.018196); elided ...]
00:18:25.856 16:14:27 -- host/failover.sh@50 -- # sleep 3
00:18:29.141 16:14:30 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:29.141 [2024-04-24 16:14:30.260777] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:29.141 16:14:30 -- host/failover.sh@55 -- # sleep 1
00:18:30.078 16:14:31 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:18:30.337 [2024-04-24 16:14:31.506623] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a1190 is same with the state(5) to be set
[... the same tcp.c:1587 *ERROR* line for tqpair=0x9a1190 repeats for over a hundred further log entries, identical apart from timestamps (16:14:31.506684 through 16:14:31.508072); elided ...]
00:18:30.339 16:14:31 -- host/failover.sh@59 -- # wait 3444072
00:18:37.010 0
00:18:37.010 16:14:37 -- host/failover.sh@61 -- # killprocess 3443940
00:18:37.010 16:14:37 -- common/autotest_common.sh@936 -- # '[' -z 3443940 ']'
00:18:37.010 16:14:37 -- common/autotest_common.sh@940 -- # kill -0 3443940
00:18:37.010 16:14:37 -- common/autotest_common.sh@941 -- # uname
00:18:37.010 16:14:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:37.010 16:14:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3443940
00:18:37.010 16:14:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:37.010 16:14:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:37.010 16:14:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3443940'
00:18:37.010 killing process with pid 3443940
00:18:37.010 16:14:37 -- common/autotest_common.sh@955 -- # kill 3443940
00:18:37.010 16:14:37 -- common/autotest_common.sh@960 -- # wait 3443940
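Condensed, the failover exercise that produced the trace above: bdevperf is given two paths to cnode1, a 15-second verify workload is started, and listeners are then removed and re-added underneath it. A sketch of the sequence, with rpc.py again shorthand for the full script path; the -s /var/tmp/bdevperf.sock calls go to the initiator-side bdevperf instance, the plain calls to the target:

    # Two paths to the same subsystem under one controller name.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # the -t 15 -w verify workload
    sleep 1
    # Pull listeners out from under the running I/O, forcing failovers.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

Each listener removal aborts the commands in flight on that path; that is what the ABORTED - SQ DELETION completions in the try.txt dump below record.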
00:18:37.010 16:14:37 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:37.010 [2024-04-24 16:14:21.189826] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:18:37.010 [2024-04-24 16:14:21.189913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3443940 ]
00:18:37.010 EAL: No free 2048 kB hugepages reported on node 1
00:18:37.010 [2024-04-24 16:14:21.249624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:37.010 [2024-04-24 16:14:21.352776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:37.010 Running I/O for 15 seconds...
00:18:37.010 [2024-04-24 16:14:23.322830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:37.010 [2024-04-24 16:14:23.322871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair.c print_command/print_completion *NOTICE* pair repeats for every in-flight command on qid:1 (READs lba:74272 through lba:74880 and WRITEs lba:75264 through lba:75280, len:8 each), each completion reporting ABORTED - SQ DELETION (00/08); the dump continues past the end of this excerpt ...]
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.012 [2024-04-24 16:14:23.325335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.012 [2024-04-24 16:14:23.325349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.012 [2024-04-24 16:14:23.325365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.012 [2024-04-24 16:14:23.325378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.012 [2024-04-24 16:14:23.325394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.012 [2024-04-24 16:14:23.325407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 
[2024-04-24 16:14:23.325947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.325976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.325990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:69 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.013 [2024-04-24 16:14:23.326597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.013 [2024-04-24 16:14:23.326613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.014 [2024-04-24 16:14:23.326626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.014 [2024-04-24 16:14:23.326642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.014 [2024-04-24 16:14:23.326659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.014 [2024-04-24 16:14:23.326674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.014 [2024-04-24 16:14:23.326689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.014 [2024-04-24 16:14:23.326705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.014 [2024-04-24 16:14:23.326719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.014 [2024-04-24 16:14:23.326733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157be70 is same with the state(5) to be set 00:18:37.014 [2024-04-24 16:14:23.326759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.014 [2024-04-24 16:14:23.326771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.014 [2024-04-24 16:14:23.326797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75256 len:8 PRP1 0x0 PRP2 0x0 00:18:37.014 [2024-04-24 16:14:23.326811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.014 [2024-04-24 16:14:23.326871] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x157be70 was disconnected and freed. reset controller. 
00:18:37.014 [2024-04-24 16:14:23.326890] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:18:37.014 [2024-04-24 16:14:23.326923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:37.014 [2024-04-24 16:14:23.326941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.014 [2024-04-24 16:14:23.326956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:37.014 [2024-04-24 16:14:23.326970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.014 [2024-04-24 16:14:23.326983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:37.014 [2024-04-24 16:14:23.326996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.014 [2024-04-24 16:14:23.327009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:37.014 [2024-04-24 16:14:23.327022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.014 [2024-04-24 16:14:23.327035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:37.014 [2024-04-24 16:14:23.330315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:37.014 [2024-04-24 16:14:23.330353] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155d3e0 (9): Bad file descriptor
00:18:37.014 [2024-04-24 16:14:23.447687] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:37.014 [2024-04-24 16:14:27.019123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:37.014 [2024-04-24 16:14:27.019168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION notice pairs repeat for lba 98224 through 98472 (len:8, cid varies) ...]
00:18:37.015 [2024-04-24 16:14:27.020159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:37.015 [2024-04-24 16:14:27.020174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION notice pairs repeat for lba 98504 through 98992 (len:8, cid varies), with late READ pairs for lba 98480 and 98488 interleaved ...]
00:18:37.017 [2024-04-24 16:14:27.022140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:37.017 [2024-04-24 16:14:27.022158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99000 len:8 PRP1 0x0 PRP2 0x0
00:18:37.017 [2024-04-24 16:14:27.022172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.017 [2024-04-24 16:14:27.022190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the aborting queued i/o / Command completed manually sequence repeats for WRITE lba 99008 through 99056 (len:8, PRP1 0x0 PRP2 0x0) ...]
00:18:37.017 [2024-04-24 16:14:27.022532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:37.017 [2024-04-24
16:14:27.022543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.022554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99064 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.022567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.022580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.022591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.022603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99072 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.022615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.022629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.022641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.022655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99080 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.022669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.022682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.022693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.022705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99088 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.022718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.022731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.022749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.022762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99096 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.022775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.022789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.022800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.022811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99104 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.022823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.022837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.022848] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.022859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99112 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.022872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.022885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.022896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.022908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99120 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.022920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.022933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.022944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.022955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99128 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.022968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.022981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.022992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.023003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99136 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.023016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.023029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.023044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.023055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99144 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.023068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.023081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.023092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.023103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99152 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.023116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.023128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.023139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.023151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99160 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.023163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.023176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.023187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.023198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99168 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.023210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.023223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.023240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.023252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99176 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.023265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.023278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.017 [2024-04-24 16:14:27.023289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.017 [2024-04-24 16:14:27.023300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99184 len:8 PRP1 0x0 PRP2 0x0 00:18:37.017 [2024-04-24 16:14:27.023313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.017 [2024-04-24 16:14:27.023326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.018 [2024-04-24 16:14:27.023337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.018 [2024-04-24 16:14:27.023348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99192 len:8 PRP1 0x0 PRP2 0x0 00:18:37.018 [2024-04-24 16:14:27.023361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.018 [2024-04-24 16:14:27.023374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.018 [2024-04-24 16:14:27.023385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.018 [2024-04-24 16:14:27.023396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99200 len:8 PRP1 0x0 PRP2 0x0 00:18:37.018 [2024-04-24 16:14:27.023409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.018 [2024-04-24 16:14:27.023425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.018 [2024-04-24 16:14:27.023437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.018 [2024-04-24 
16:14:27.023448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99208 len:8 PRP1 0x0 PRP2 0x0 00:18:37.018 [2024-04-24 16:14:27.023461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.018 [2024-04-24 16:14:27.023474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.018 [2024-04-24 16:14:27.023485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.018 [2024-04-24 16:14:27.023496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99216 len:8 PRP1 0x0 PRP2 0x0 00:18:37.018 [2024-04-24 16:14:27.023509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.018 [2024-04-24 16:14:27.023521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.018 [2024-04-24 16:14:27.023532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.018 [2024-04-24 16:14:27.023544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99224 len:8 PRP1 0x0 PRP2 0x0 00:18:37.018 [2024-04-24 16:14:27.023556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.018 [2024-04-24 16:14:27.023569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.018 [2024-04-24 16:14:27.023580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.018 [2024-04-24 16:14:27.023591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99232 len:8 PRP1 0x0 PRP2 0x0 00:18:37.018 [2024-04-24 16:14:27.023604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.018 [2024-04-24 16:14:27.023665] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x157de10 was disconnected and freed. reset controller. 
00:18:37.018 [2024-04-24 16:14:27.023685] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:18:37.018 [2024-04-24 16:14:27.023718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:37.018 [2024-04-24 16:14:27.023735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.018 [2024-04-24 16:14:27.023757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:37.018 [2024-04-24 16:14:27.023772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.018 [2024-04-24 16:14:27.023794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:37.018 [2024-04-24 16:14:27.023815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.018 [2024-04-24 16:14:27.023835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:37.018 [2024-04-24 16:14:27.023856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.018 [2024-04-24 16:14:27.023876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:37.018 [2024-04-24 16:14:27.023938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155d3e0 (9): Bad file descriptor
00:18:37.018 [2024-04-24 16:14:27.027133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:37.018 [2024-04-24 16:14:27.184849] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:37.018 [2024-04-24 16:14:31.509712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:37.018 [2024-04-24 16:14:31.509774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ/WRITE + "ABORTED - SQ DELETION" pairs for lba:57040 through lba:57888 trimmed ...]
00:18:37.021 [2024-04-24 16:14:31.512786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:37.021 [2024-04-24 16:14:31.512803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57896 len:8 PRP1 0x0 PRP2 0x0
00:18:37.021 [2024-04-24 16:14:31.512823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated "aborting queued i/o" / "Command completed manually" / WRITE / "ABORTED - SQ DELETION" groups for lba:57904 through lba:57984 trimmed ...]
00:18:37.021 [2024-04-24 16:14:31.513378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:37.021 [2024-04-24 16:14:31.513389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:37.021 [2024-04-24 16:14:31.513400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57992 len:8 PRP1 0x0 PRP2 0x0
00:18:37.021 [2024-04-24 16:14:31.513413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.021 [2024-04-24 16:14:31.513426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:37.021 [2024-04-24 16:14:31.513437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.021 [2024-04-24 16:14:31.513448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58000 len:8 PRP1 0x0 PRP2 0x0 00:18:37.021 [2024-04-24 16:14:31.513461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.021 [2024-04-24 16:14:31.513474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.021 [2024-04-24 16:14:31.513487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.021 [2024-04-24 16:14:31.513499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58008 len:8 PRP1 0x0 PRP2 0x0 00:18:37.021 [2024-04-24 16:14:31.513512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.021 [2024-04-24 16:14:31.513525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.021 [2024-04-24 16:14:31.513537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.021 [2024-04-24 16:14:31.513549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58016 len:8 PRP1 0x0 PRP2 0x0 00:18:37.021 [2024-04-24 16:14:31.513561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.021 [2024-04-24 16:14:31.513578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.021 [2024-04-24 16:14:31.513590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.021 [2024-04-24 16:14:31.513602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58024 len:8 PRP1 0x0 PRP2 0x0 00:18:37.021 [2024-04-24 16:14:31.513615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.021 [2024-04-24 16:14:31.513628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.021 [2024-04-24 16:14:31.513640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.021 [2024-04-24 16:14:31.513651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58032 len:8 PRP1 0x0 PRP2 0x0 00:18:37.021 [2024-04-24 16:14:31.513664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.021 [2024-04-24 16:14:31.513677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.021 [2024-04-24 16:14:31.513689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.021 [2024-04-24 16:14:31.513700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58040 len:8 PRP1 0x0 PRP2 0x0 00:18:37.021 [2024-04-24 16:14:31.513713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.021 [2024-04-24 16:14:31.513726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.021 [2024-04-24 
16:14:31.513738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.021 [2024-04-24 16:14:31.513760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58048 len:8 PRP1 0x0 PRP2 0x0 00:18:37.021 [2024-04-24 16:14:31.513774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.021 [2024-04-24 16:14:31.513787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.021 [2024-04-24 16:14:31.513799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.021 [2024-04-24 16:14:31.513810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57104 len:8 PRP1 0x0 PRP2 0x0 00:18:37.021 [2024-04-24 16:14:31.513823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.021 [2024-04-24 16:14:31.513836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.021 [2024-04-24 16:14:31.513848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.021 [2024-04-24 16:14:31.513859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57112 len:8 PRP1 0x0 PRP2 0x0 00:18:37.021 [2024-04-24 16:14:31.513872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.021 [2024-04-24 16:14:31.513885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.022 [2024-04-24 16:14:31.513896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.022 [2024-04-24 16:14:31.513908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57120 len:8 PRP1 0x0 PRP2 0x0 00:18:37.022 [2024-04-24 16:14:31.513921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.513934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.022 [2024-04-24 16:14:31.513947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.022 [2024-04-24 16:14:31.513958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57128 len:8 PRP1 0x0 PRP2 0x0 00:18:37.022 [2024-04-24 16:14:31.513975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.513989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.022 [2024-04-24 16:14:31.514001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.022 [2024-04-24 16:14:31.514013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57136 len:8 PRP1 0x0 PRP2 0x0 00:18:37.022 [2024-04-24 16:14:31.514026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.514040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.022 [2024-04-24 16:14:31.514051] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.022 [2024-04-24 16:14:31.514063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57144 len:8 PRP1 0x0 PRP2 0x0 00:18:37.022 [2024-04-24 16:14:31.514075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.514089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.022 [2024-04-24 16:14:31.514100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.022 [2024-04-24 16:14:31.514111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57152 len:8 PRP1 0x0 PRP2 0x0 00:18:37.022 [2024-04-24 16:14:31.514124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.514137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.022 [2024-04-24 16:14:31.514148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.022 [2024-04-24 16:14:31.514160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57160 len:8 PRP1 0x0 PRP2 0x0 00:18:37.022 [2024-04-24 16:14:31.514173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.514230] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15698d0 was disconnected and freed. reset controller. 00:18:37.022 [2024-04-24 16:14:31.514250] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:37.022 [2024-04-24 16:14:31.514283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.022 [2024-04-24 16:14:31.514301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.514316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.022 [2024-04-24 16:14:31.514329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.514343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.022 [2024-04-24 16:14:31.514356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.514370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.022 [2024-04-24 16:14:31.514383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.022 [2024-04-24 16:14:31.514401] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
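The flood of ABORTED - SQ DELETION notices above is expected behaviour rather than a failure: when bdev_nvme tears down the active queue pair to fail over, the submission queue is deleted and every in-flight or queued WRITE/READ is completed manually with that status before the controller reconnects. The harness never inspects these entries individually; after the run it only counts successful resets in the captured bdevperf log, roughly as follows (a minimal sketch of the check at host/failover.sh@65-67; $testdir is a stand-in for the .../spdk/test/nvmf/host directory):

  # count one 'Resetting controller successful' per forced failover (sketch)
  count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
  (( count != 3 )) && exit 1   # three path drops are expected to yield three resets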
00:18:37.022 [2024-04-24 16:14:31.517605] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:37.022 [2024-04-24 16:14:31.517648] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155d3e0 (9): Bad file descriptor
00:18:37.022 [2024-04-24 16:14:31.555888] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:37.022
00:18:37.022 Latency(us)
00:18:37.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:37.022 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:37.022 Verification LBA range: start 0x0 length 0x4000
00:18:37.022 NVMe0n1 : 15.01 8487.04 33.15 815.13 0.00 13733.01 782.79 16893.72
00:18:37.022 ===================================================================================================================
00:18:37.022 Total : 8487.04 33.15 815.13 0.00 13733.01 782.79 16893.72
00:18:37.022 Received shutdown signal, test time was about 15.000000 seconds
00:18:37.022
00:18:37.022 Latency(us)
00:18:37.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:37.022 ===================================================================================================================
00:18:37.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:37.022 16:14:37 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:37.022 16:14:37 -- host/failover.sh@65 -- # count=3
00:18:37.022 16:14:37 -- host/failover.sh@67 -- # (( count != 3 ))
00:18:37.022 16:14:37 -- host/failover.sh@73 -- # bdevperf_pid=3445804
00:18:37.022 16:14:37 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:37.022 16:14:37 -- host/failover.sh@75 -- # waitforlisten 3445804 /var/tmp/bdevperf.sock
00:18:37.022 16:14:37 -- common/autotest_common.sh@817 -- # '[' -z 3445804 ']'
00:18:37.022 16:14:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:37.022 16:14:37 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:37.022 16:14:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
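With bdevperf restarted in idle mode (-z) and waiting on its RPC socket, the trace below wires up a three-path failover scenario before kicking off the timed run. Condensed into plain shell, it amounts to this (a sketch only; $rpc is shorthand for the workspace's spdk/scripts/rpc.py and error checking is omitted):

  # expose two extra portals on the target side
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # register all three paths with the idle bdevperf instance
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # drop the active path so I/O has to fail over, then run the workload
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests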
00:18:37.022 16:14:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.022 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:18:37.022 16:14:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:37.022 16:14:37 -- common/autotest_common.sh@850 -- # return 0 00:18:37.022 16:14:37 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:37.022 [2024-04-24 16:14:38.109910] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:37.022 16:14:38 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:37.281 [2024-04-24 16:14:38.342591] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:37.281 16:14:38 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.538 NVMe0n1 00:18:37.538 16:14:38 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.796 00:18:37.796 16:14:38 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:38.362 00:18:38.362 16:14:39 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:38.362 16:14:39 -- host/failover.sh@82 -- # grep -q NVMe0 00:18:38.620 16:14:39 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:38.879 16:14:39 -- host/failover.sh@87 -- # sleep 3 00:18:42.168 16:14:42 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:42.168 16:14:42 -- host/failover.sh@88 -- # grep -q NVMe0 00:18:42.168 16:14:43 -- host/failover.sh@90 -- # run_test_pid=3446480 00:18:42.168 16:14:43 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.168 16:14:43 -- host/failover.sh@92 -- # wait 3446480 00:18:43.104 0 00:18:43.104 16:14:44 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:43.104 [2024-04-24 16:14:37.610914] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:18:43.104 [2024-04-24 16:14:37.611000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445804 ] 00:18:43.104 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.104 [2024-04-24 16:14:37.679492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.104 [2024-04-24 16:14:37.781501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.104 [2024-04-24 16:14:39.917953] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:43.104 [2024-04-24 16:14:39.918042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.104 [2024-04-24 16:14:39.918064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.104 [2024-04-24 16:14:39.918081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.104 [2024-04-24 16:14:39.918101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.104 [2024-04-24 16:14:39.918115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.104 [2024-04-24 16:14:39.918129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.104 [2024-04-24 16:14:39.918143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.104 [2024-04-24 16:14:39.918157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.104 [2024-04-24 16:14:39.918171] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:43.104 [2024-04-24 16:14:39.918220] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.104 [2024-04-24 16:14:39.918254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb13e0 (9): Bad file descriptor 00:18:43.104 [2024-04-24 16:14:39.965389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:43.104 Running I/O for 1 seconds... 
00:18:43.104
00:18:43.104 Latency(us)
00:18:43.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:43.104 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:43.104 Verification LBA range: start 0x0 length 0x4000
00:18:43.104 NVMe0n1 : 1.01 7340.26 28.67 0.00 0.00 17371.23 3446.71 15146.10
00:18:43.104 ===================================================================================================================
00:18:43.104 Total : 7340.26 28.67 0.00 0.00 17371.23 3446.71 15146.10
00:18:43.104 16:14:44 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:14:44 -- host/failover.sh@95 -- # grep -q NVMe0
16:14:44 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
16:14:44 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:14:44 -- host/failover.sh@99 -- # grep -q NVMe0
16:14:45 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
16:14:45 -- host/failover.sh@101 -- # sleep 3
00:18:47.428 16:14:48 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:14:48 -- host/failover.sh@103 -- # grep -q NVMe0
16:14:48 -- host/failover.sh@108 -- # killprocess 3445804
16:14:48 -- common/autotest_common.sh@936 -- # '[' -z 3445804 ']'
16:14:48 -- common/autotest_common.sh@940 -- # kill -0 3445804
16:14:48 -- common/autotest_common.sh@941 -- # uname
16:14:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
16:14:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3445804
16:14:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0
16:14:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
16:14:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3445804'
killing process with pid 3445804
16:14:48 -- common/autotest_common.sh@955 -- # kill 3445804
16:14:48 -- common/autotest_common.sh@960 -- # wait 3445804
16:14:48 -- host/failover.sh@110 -- # sync
16:14:49 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
16:14:49 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
16:14:49 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
16:14:49 -- host/failover.sh@116 -- # nvmftestfini
16:14:49 -- nvmf/common.sh@477 -- # nvmfcleanup
16:14:49 -- nvmf/common.sh@117 -- # sync
16:14:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
16:14:49 -- nvmf/common.sh@120 -- # set +e
16:14:49 -- nvmf/common.sh@121 -- # for i in {1..20}
16:14:49 -- nvmf/common.sh@122 --
# modprobe -v -r nvme-tcp 00:18:47.946 rmmod nvme_tcp 00:18:47.946 rmmod nvme_fabrics 00:18:47.946 rmmod nvme_keyring 00:18:47.946 16:14:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:47.946 16:14:49 -- nvmf/common.sh@124 -- # set -e 00:18:47.946 16:14:49 -- nvmf/common.sh@125 -- # return 0 00:18:47.946 16:14:49 -- nvmf/common.sh@478 -- # '[' -n 3443649 ']' 00:18:47.946 16:14:49 -- nvmf/common.sh@479 -- # killprocess 3443649 00:18:47.946 16:14:49 -- common/autotest_common.sh@936 -- # '[' -z 3443649 ']' 00:18:47.946 16:14:49 -- common/autotest_common.sh@940 -- # kill -0 3443649 00:18:47.946 16:14:49 -- common/autotest_common.sh@941 -- # uname 00:18:47.946 16:14:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:47.946 16:14:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3443649 00:18:47.946 16:14:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:47.946 16:14:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:47.946 16:14:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3443649' 00:18:47.946 killing process with pid 3443649 00:18:47.946 16:14:49 -- common/autotest_common.sh@955 -- # kill 3443649 00:18:47.946 16:14:49 -- common/autotest_common.sh@960 -- # wait 3443649 00:18:48.513 16:14:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:48.513 16:14:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:48.513 16:14:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:48.513 16:14:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.513 16:14:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.513 16:14:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.513 16:14:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.513 16:14:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.416 16:14:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.416 00:18:50.416 real 0m34.725s 00:18:50.416 user 2m1.946s 00:18:50.417 sys 0m5.720s 00:18:50.417 16:14:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:50.417 16:14:51 -- common/autotest_common.sh@10 -- # set +x 00:18:50.417 ************************************ 00:18:50.417 END TEST nvmf_failover 00:18:50.417 ************************************ 00:18:50.417 16:14:51 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:50.417 16:14:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:50.417 16:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:50.417 16:14:51 -- common/autotest_common.sh@10 -- # set +x 00:18:50.417 ************************************ 00:18:50.417 START TEST nvmf_discovery 00:18:50.417 ************************************ 00:18:50.417 16:14:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:50.678 * Looking for test storage... 
00:18:50.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:50.678 16:14:51 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.678 16:14:51 -- nvmf/common.sh@7 -- # uname -s 00:18:50.678 16:14:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.678 16:14:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.678 16:14:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.678 16:14:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.678 16:14:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.678 16:14:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.678 16:14:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.678 16:14:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.678 16:14:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.678 16:14:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.678 16:14:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:50.678 16:14:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:50.678 16:14:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.678 16:14:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.678 16:14:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.678 16:14:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.678 16:14:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.678 16:14:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.678 16:14:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.678 16:14:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.678 16:14:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.678 16:14:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.678 16:14:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.678 16:14:51 -- paths/export.sh@5 -- # export PATH 00:18:50.678 16:14:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.678 16:14:51 -- nvmf/common.sh@47 -- # : 0 00:18:50.678 16:14:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.678 16:14:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.678 16:14:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.678 16:14:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.678 16:14:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.678 16:14:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.678 16:14:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.678 16:14:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.678 16:14:51 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:50.678 16:14:51 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:50.679 16:14:51 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:50.679 16:14:51 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:50.679 16:14:51 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:50.679 16:14:51 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:50.679 16:14:51 -- host/discovery.sh@25 -- # nvmftestinit 00:18:50.679 16:14:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:50.679 16:14:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.679 16:14:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:50.679 16:14:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:50.679 16:14:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:50.679 16:14:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.679 16:14:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.679 16:14:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.679 16:14:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:50.679 16:14:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:50.679 16:14:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:50.679 16:14:51 -- common/autotest_common.sh@10 -- # set +x 00:18:52.587 16:14:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:52.587 16:14:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.587 16:14:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.587 16:14:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.587 16:14:53 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.587 16:14:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.587 16:14:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.587 16:14:53 -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.587 16:14:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.587 16:14:53 -- nvmf/common.sh@296 -- # e810=() 00:18:52.587 16:14:53 -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.587 16:14:53 -- nvmf/common.sh@297 -- # x722=() 00:18:52.587 16:14:53 -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.587 16:14:53 -- nvmf/common.sh@298 -- # mlx=() 00:18:52.587 16:14:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.587 16:14:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.587 16:14:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.587 16:14:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.587 16:14:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.587 16:14:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.587 16:14:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:52.587 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:52.587 16:14:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.587 16:14:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:52.587 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:52.587 16:14:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.587 16:14:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.587 
16:14:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.587 16:14:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:52.587 16:14:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.587 16:14:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:52.587 Found net devices under 0000:09:00.0: cvl_0_0 00:18:52.587 16:14:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.587 16:14:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.587 16:14:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.587 16:14:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:52.587 16:14:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.587 16:14:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:52.587 Found net devices under 0000:09:00.1: cvl_0_1 00:18:52.587 16:14:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.587 16:14:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:52.587 16:14:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:52.587 16:14:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:52.587 16:14:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.587 16:14:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.587 16:14:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.587 16:14:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:52.587 16:14:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.587 16:14:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.587 16:14:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:52.587 16:14:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.587 16:14:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.587 16:14:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:52.587 16:14:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:52.587 16:14:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.587 16:14:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.587 16:14:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.587 16:14:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.587 16:14:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:52.587 16:14:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.587 16:14:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.587 16:14:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.587 16:14:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:52.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:18:52.587 00:18:52.587 --- 10.0.0.2 ping statistics --- 00:18:52.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.587 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:52.587 16:14:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:18:52.587 00:18:52.587 --- 10.0.0.1 ping statistics --- 00:18:52.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.587 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:18:52.587 16:14:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.587 16:14:53 -- nvmf/common.sh@411 -- # return 0 00:18:52.587 16:14:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:52.587 16:14:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.587 16:14:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:52.587 16:14:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.587 16:14:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:52.587 16:14:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:52.587 16:14:53 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:52.587 16:14:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:52.587 16:14:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:52.587 16:14:53 -- common/autotest_common.sh@10 -- # set +x 00:18:52.587 16:14:53 -- nvmf/common.sh@470 -- # nvmfpid=3449193 00:18:52.587 16:14:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.587 16:14:53 -- nvmf/common.sh@471 -- # waitforlisten 3449193 00:18:52.587 16:14:53 -- common/autotest_common.sh@817 -- # '[' -z 3449193 ']' 00:18:52.587 16:14:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.587 16:14:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:52.587 16:14:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.587 16:14:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:52.587 16:14:53 -- common/autotest_common.sh@10 -- # set +x 00:18:52.587 [2024-04-24 16:14:53.792795] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:18:52.587 [2024-04-24 16:14:53.792890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.587 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.587 [2024-04-24 16:14:53.857438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.849 [2024-04-24 16:14:53.962001] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.849 [2024-04-24 16:14:53.962080] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.849 [2024-04-24 16:14:53.962094] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.849 [2024-04-24 16:14:53.962106] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.849 [2024-04-24 16:14:53.962116] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
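Before the discovery test proper starts, nvmftestinit has rebuilt the usual two-sided TCP test bed: one e810 port is moved into a network namespace to play the target while its sibling stays in the root namespace as the initiator. Stripped of the xtrace noise, the bring-up traced above reduces to this (a sketch; the cvl_0_0/cvl_0_1 names are whatever the ice driver detected on this node, and the workspace path is shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # sanity-check both directions
  # nvmfappstart then launches the target inside that namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2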
00:18:52.849 [2024-04-24 16:14:53.962149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.849 16:14:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:52.849 16:14:54 -- common/autotest_common.sh@850 -- # return 0 00:18:52.849 16:14:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:52.849 16:14:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:52.849 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:52.849 16:14:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.849 16:14:54 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:52.849 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.849 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:52.849 [2024-04-24 16:14:54.107890] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.849 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.849 16:14:54 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:52.849 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.849 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:52.849 [2024-04-24 16:14:54.116103] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:52.849 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.849 16:14:54 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:52.849 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.849 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:52.849 null0 00:18:52.849 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.849 16:14:54 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:52.849 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.849 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.111 null1 00:18:53.111 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.111 16:14:54 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:53.111 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.111 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.111 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.111 16:14:54 -- host/discovery.sh@45 -- # hostpid=3449220 00:18:53.111 16:14:54 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:53.111 16:14:54 -- host/discovery.sh@46 -- # waitforlisten 3449220 /tmp/host.sock 00:18:53.111 16:14:54 -- common/autotest_common.sh@817 -- # '[' -z 3449220 ']' 00:18:53.111 16:14:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:53.111 16:14:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:53.111 16:14:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:53.111 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:53.111 16:14:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:53.111 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.111 [2024-04-24 16:14:54.187044] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
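At this point the fixture for the discovery test is complete: the target (inside the namespace) has a TCP transport, a discovery listener on port 8009 and two null bdevs to back later subsystems, while a second nvmf_tgt instance on /tmp/host.sock will act as the NVMe-oF host. In outline (a sketch; rpc_cmd is the harness wrapper around scripts/rpc.py, pointed at the target by default or at the host app via -s):

  # target side
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine
  # host side: a second SPDK app driven over its own RPC socket
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  # once it is listening, the test below enables bdev_nvme logging and starts
  # the discovery service against the target's discovery subsystem:
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test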
00:18:53.111 [2024-04-24 16:14:54.187123] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449220 ] 00:18:53.111 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.111 [2024-04-24 16:14:54.247947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.111 [2024-04-24 16:14:54.355858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.370 16:14:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:53.370 16:14:54 -- common/autotest_common.sh@850 -- # return 0 00:18:53.370 16:14:54 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:53.370 16:14:54 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:53.370 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.370 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.370 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.370 16:14:54 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:53.370 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.370 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.370 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.370 16:14:54 -- host/discovery.sh@72 -- # notify_id=0 00:18:53.370 16:14:54 -- host/discovery.sh@83 -- # get_subsystem_names 00:18:53.370 16:14:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:53.370 16:14:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:53.370 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.370 16:14:54 -- host/discovery.sh@59 -- # sort 00:18:53.370 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.370 16:14:54 -- host/discovery.sh@59 -- # xargs 00:18:53.370 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.370 16:14:54 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:53.370 16:14:54 -- host/discovery.sh@84 -- # get_bdev_list 00:18:53.370 16:14:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.370 16:14:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:53.370 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.370 16:14:54 -- host/discovery.sh@55 -- # sort 00:18:53.370 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.370 16:14:54 -- host/discovery.sh@55 -- # xargs 00:18:53.370 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.370 16:14:54 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:53.370 16:14:54 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:53.370 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.370 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.370 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.370 16:14:54 -- host/discovery.sh@87 -- # get_subsystem_names 00:18:53.370 16:14:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:53.370 16:14:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:53.370 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.370 16:14:54 -- common/autotest_common.sh@10 -- # set 
+x 00:18:53.370 16:14:54 -- host/discovery.sh@59 -- # sort 00:18:53.370 16:14:54 -- host/discovery.sh@59 -- # xargs 00:18:53.370 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.370 16:14:54 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:53.370 16:14:54 -- host/discovery.sh@88 -- # get_bdev_list 00:18:53.370 16:14:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.370 16:14:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:53.370 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.370 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.371 16:14:54 -- host/discovery.sh@55 -- # sort 00:18:53.371 16:14:54 -- host/discovery.sh@55 -- # xargs 00:18:53.371 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.629 16:14:54 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:53.629 16:14:54 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:53.629 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.629 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.629 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.629 16:14:54 -- host/discovery.sh@91 -- # get_subsystem_names 00:18:53.629 16:14:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:53.629 16:14:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:53.629 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.629 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.629 16:14:54 -- host/discovery.sh@59 -- # sort 00:18:53.629 16:14:54 -- host/discovery.sh@59 -- # xargs 00:18:53.629 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.629 16:14:54 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:53.629 16:14:54 -- host/discovery.sh@92 -- # get_bdev_list 00:18:53.629 16:14:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.629 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.629 16:14:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:53.629 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.629 16:14:54 -- host/discovery.sh@55 -- # sort 00:18:53.629 16:14:54 -- host/discovery.sh@55 -- # xargs 00:18:53.629 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.629 16:14:54 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:53.629 16:14:54 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:53.629 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.629 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.629 [2024-04-24 16:14:54.761779] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.629 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.629 16:14:54 -- host/discovery.sh@97 -- # get_subsystem_names 00:18:53.629 16:14:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:53.629 16:14:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:53.630 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.630 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.630 16:14:54 -- host/discovery.sh@59 -- # sort 00:18:53.630 16:14:54 -- host/discovery.sh@59 -- # xargs 00:18:53.630 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.630 16:14:54 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:53.630 16:14:54 -- host/discovery.sh@98 -- # get_bdev_list 00:18:53.630 16:14:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.630 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.630 16:14:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:53.630 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.630 16:14:54 -- host/discovery.sh@55 -- # sort 00:18:53.630 16:14:54 -- host/discovery.sh@55 -- # xargs 00:18:53.630 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.630 16:14:54 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:53.630 16:14:54 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:53.630 16:14:54 -- host/discovery.sh@79 -- # expected_count=0 00:18:53.630 16:14:54 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:53.630 16:14:54 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:53.630 16:14:54 -- common/autotest_common.sh@901 -- # local max=10 00:18:53.630 16:14:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:53.630 16:14:54 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:53.630 16:14:54 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:53.630 16:14:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:53.630 16:14:54 -- host/discovery.sh@74 -- # jq '. | length' 00:18:53.630 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.630 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.630 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.630 16:14:54 -- host/discovery.sh@74 -- # notification_count=0 00:18:53.630 16:14:54 -- host/discovery.sh@75 -- # notify_id=0 00:18:53.630 16:14:54 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:53.630 16:14:54 -- common/autotest_common.sh@904 -- # return 0 00:18:53.630 16:14:54 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:53.630 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.630 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.630 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.630 16:14:54 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:53.630 16:14:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:53.630 16:14:54 -- common/autotest_common.sh@901 -- # local max=10 00:18:53.630 16:14:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:53.630 16:14:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:53.630 16:14:54 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:53.630 16:14:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:53.630 16:14:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:53.630 16:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.630 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.630 16:14:54 -- host/discovery.sh@59 -- # sort 00:18:53.630 16:14:54 -- host/discovery.sh@59 -- # xargs 00:18:53.630 16:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:18:53.890 16:14:54 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:18:53.890 16:14:54 -- common/autotest_common.sh@906 -- # sleep 1 00:18:54.459 [2024-04-24 16:14:55.496094] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:54.459 [2024-04-24 16:14:55.496126] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:54.459 [2024-04-24 16:14:55.496152] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:54.459 [2024-04-24 16:14:55.582432] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:54.717 [2024-04-24 16:14:55.806825] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:54.717 [2024-04-24 16:14:55.806849] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:54.717 16:14:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:54.717 16:14:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:54.717 16:14:55 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:54.717 16:14:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:54.717 16:14:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:54.717 16:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.717 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:18:54.717 16:14:55 -- host/discovery.sh@59 -- # sort 00:18:54.717 16:14:55 -- host/discovery.sh@59 -- # xargs 00:18:54.717 16:14:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.717 16:14:55 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.717 16:14:55 -- common/autotest_common.sh@904 -- # return 0 00:18:54.717 16:14:55 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:54.717 16:14:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:54.717 16:14:55 -- common/autotest_common.sh@901 -- # local max=10 00:18:54.717 16:14:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:54.717 16:14:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:54.717 16:14:55 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:54.717 16:14:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:54.717 16:14:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:54.717 16:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.717 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:18:54.717 16:14:55 -- host/discovery.sh@55 -- # sort 00:18:54.717 16:14:55 -- host/discovery.sh@55 -- # xargs 00:18:54.717 16:14:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:54.975 16:14:56 -- common/autotest_common.sh@904 -- # return 0 00:18:54.975 16:14:56 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:54.975 16:14:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:54.975 16:14:56 -- common/autotest_common.sh@901 -- # local max=10 00:18:54.975 16:14:56 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:54.975 16:14:56 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:54.975 16:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.975 16:14:56 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:54.975 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:54.975 16:14:56 -- host/discovery.sh@63 -- # sort -n 00:18:54.975 16:14:56 -- host/discovery.sh@63 -- # xargs 00:18:54.975 16:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:18:54.975 16:14:56 -- common/autotest_common.sh@904 -- # return 0 00:18:54.975 16:14:56 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:54.975 16:14:56 -- host/discovery.sh@79 -- # expected_count=1 00:18:54.975 16:14:56 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:54.975 16:14:56 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:54.975 16:14:56 -- common/autotest_common.sh@901 -- # local max=10 00:18:54.975 16:14:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:54.975 16:14:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:54.975 16:14:56 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:54.975 16:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.975 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:54.975 16:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.975 16:14:56 -- host/discovery.sh@74 -- # notification_count=1 00:18:54.975 16:14:56 -- host/discovery.sh@75 -- # notify_id=1 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:54.975 16:14:56 -- common/autotest_common.sh@904 -- # return 0 00:18:54.975 16:14:56 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:54.975 16:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.975 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:54.975 16:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.975 16:14:56 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:54.975 16:14:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:54.975 16:14:56 -- common/autotest_common.sh@901 -- # local max=10 00:18:54.975 16:14:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:54.975 16:14:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:54.975 16:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.975 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:54.975 16:14:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:54.975 16:14:56 -- host/discovery.sh@55 -- # sort 00:18:54.975 16:14:56 -- host/discovery.sh@55 -- # xargs 00:18:54.975 16:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.975 16:14:56 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:54.975 16:14:56 -- common/autotest_common.sh@904 -- # return 0 00:18:54.975 16:14:56 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:54.975 16:14:56 -- host/discovery.sh@79 -- # expected_count=1 00:18:54.975 16:14:56 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:54.976 16:14:56 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:54.976 16:14:56 -- common/autotest_common.sh@901 -- # local max=10 00:18:54.976 16:14:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:54.976 16:14:56 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:54.976 16:14:56 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:54.976 16:14:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:54.976 16:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.976 16:14:56 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:54.976 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:54.976 16:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.976 16:14:56 -- host/discovery.sh@74 -- # notification_count=1 00:18:54.976 16:14:56 -- host/discovery.sh@75 -- # notify_id=2 00:18:54.976 16:14:56 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:54.976 16:14:56 -- common/autotest_common.sh@904 -- # return 0 00:18:54.976 16:14:56 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:54.976 16:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.976 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:54.976 [2024-04-24 16:14:56.186199] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:54.976 [2024-04-24 16:14:56.186511] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:54.976 [2024-04-24 16:14:56.186548] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:54.976 16:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.976 16:14:56 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:54.976 16:14:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:54.976 16:14:56 -- common/autotest_common.sh@901 -- # local max=10 00:18:54.976 16:14:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:54.976 16:14:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:54.976 16:14:56 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:54.976 16:14:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:54.976 16:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.976 16:14:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:54.976 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:54.976 16:14:56 -- host/discovery.sh@59 -- # sort 00:18:54.976 16:14:56 -- host/discovery.sh@59 -- # xargs 00:18:54.976 16:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.976 16:14:56 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.976 16:14:56 -- common/autotest_common.sh@904 -- # return 0 00:18:54.976 16:14:56 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:54.976 16:14:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:54.976 16:14:56 -- common/autotest_common.sh@901 -- # local max=10 00:18:54.976 16:14:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:54.976 16:14:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:54.976 16:14:56 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:54.976 16:14:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:54.976 16:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.976 16:14:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:54.976 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:54.976 16:14:56 -- host/discovery.sh@55 -- # sort 00:18:54.976 16:14:56 -- host/discovery.sh@55 -- # xargs 00:18:54.976 16:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.234 16:14:56 -- common/autotest_common.sh@903 
-- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:55.234 16:14:56 -- common/autotest_common.sh@904 -- # return 0 00:18:55.234 16:14:56 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:55.234 16:14:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:55.234 16:14:56 -- common/autotest_common.sh@901 -- # local max=10 00:18:55.234 16:14:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:55.234 16:14:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:55.234 16:14:56 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:55.234 16:14:56 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:55.234 16:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.234 16:14:56 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:55.234 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:55.234 16:14:56 -- host/discovery.sh@63 -- # sort -n 00:18:55.234 16:14:56 -- host/discovery.sh@63 -- # xargs 00:18:55.234 [2024-04-24 16:14:56.273851] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:55.234 16:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.234 16:14:56 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:55.234 16:14:56 -- common/autotest_common.sh@906 -- # sleep 1 00:18:55.493 [2024-04-24 16:14:56.541142] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:55.493 [2024-04-24 16:14:56.541168] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:55.493 [2024-04-24 16:14:56.541179] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:56.061 16:14:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:56.061 16:14:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:56.061 16:14:57 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:56.061 16:14:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:56.061 16:14:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:56.061 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.061 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.061 16:14:57 -- host/discovery.sh@63 -- # sort -n 00:18:56.061 16:14:57 -- host/discovery.sh@63 -- # xargs 00:18:56.061 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.320 16:14:57 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:56.320 16:14:57 -- common/autotest_common.sh@904 -- # return 0 00:18:56.320 16:14:57 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:56.320 16:14:57 -- host/discovery.sh@79 -- # expected_count=0 00:18:56.320 16:14:57 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:56.320 16:14:57 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:18:56.320 16:14:57 -- common/autotest_common.sh@901 -- # local max=10 00:18:56.320 16:14:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:56.320 16:14:57 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:56.320 16:14:57 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:56.320 16:14:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:56.320 16:14:57 -- host/discovery.sh@74 -- # jq '. | length' 00:18:56.320 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.320 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.320 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.320 16:14:57 -- host/discovery.sh@74 -- # notification_count=0 00:18:56.320 16:14:57 -- host/discovery.sh@75 -- # notify_id=2 00:18:56.320 16:14:57 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:56.320 16:14:57 -- common/autotest_common.sh@904 -- # return 0 00:18:56.320 16:14:57 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:56.320 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.320 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.320 [2024-04-24 16:14:57.406076] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:56.320 [2024-04-24 16:14:57.406111] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:56.320 [2024-04-24 16:14:57.407714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.320 [2024-04-24 16:14:57.407751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.320 [2024-04-24 16:14:57.407773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.320 [2024-04-24 16:14:57.407814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.320 [2024-04-24 16:14:57.407829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.320 [2024-04-24 16:14:57.407844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.320 [2024-04-24 16:14:57.407859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.320 [2024-04-24 16:14:57.407873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.320 [2024-04-24 16:14:57.407887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ba20 is same with the state(5) to be set 00:18:56.320 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.320 16:14:57 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:56.320 16:14:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:56.320 16:14:57 -- 
common/autotest_common.sh@901 -- # local max=10 00:18:56.320 16:14:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:56.320 16:14:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:56.320 16:14:57 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:56.320 16:14:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:56.320 16:14:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:56.320 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.320 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.320 16:14:57 -- host/discovery.sh@59 -- # sort 00:18:56.320 16:14:57 -- host/discovery.sh@59 -- # xargs 00:18:56.320 [2024-04-24 16:14:57.417721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ba20 (9): Bad file descriptor 00:18:56.320 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.320 [2024-04-24 16:14:57.427769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:56.320 [2024-04-24 16:14:57.428048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.320 [2024-04-24 16:14:57.428220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.320 [2024-04-24 16:14:57.428249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2ba20 with addr=10.0.0.2, port=4420 00:18:56.320 [2024-04-24 16:14:57.428267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ba20 is same with the state(5) to be set 00:18:56.320 [2024-04-24 16:14:57.428292] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ba20 (9): Bad file descriptor 00:18:56.320 [2024-04-24 16:14:57.428331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:56.320 [2024-04-24 16:14:57.428352] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:56.320 [2024-04-24 16:14:57.428368] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:56.320 [2024-04-24 16:14:57.428405] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
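The block above is the first fallout from removing the 4420 listener: the host's admin queue pair on that path is torn down (the ABORTED - SQ DELETION completions), and each reconnect attempt fails with connect() errno 111 (ECONNREFUSED) because nothing listens on 10.0.0.2:4420 any more. The rpc_cmd calls in the trace are the harness's wrapper around SPDK's scripts/rpc.py; a minimal by-hand sketch of the same step, assuming the addresses and host socket used in this run:

  # target side (default RPC socket): drop the 4420 listener;
  # attached hosts see their admin SQ deleted on that path
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420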
00:18:56.320 [2024-04-24 16:14:57.437858] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:56.320 [2024-04-24 16:14:57.438090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.438270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.438299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2ba20 with addr=10.0.0.2, port=4420 00:18:56.321 [2024-04-24 16:14:57.438317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ba20 is same with the state(5) to be set 00:18:56.321 [2024-04-24 16:14:57.438342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ba20 (9): Bad file descriptor 00:18:56.321 [2024-04-24 16:14:57.438378] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:56.321 [2024-04-24 16:14:57.438397] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:56.321 [2024-04-24 16:14:57.438413] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:56.321 [2024-04-24 16:14:57.438434] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:56.321 [2024-04-24 16:14:57.447941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:56.321 [2024-04-24 16:14:57.448153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.448348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.448373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2ba20 with addr=10.0.0.2, port=4420 00:18:56.321 [2024-04-24 16:14:57.448389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ba20 is same with the state(5) to be set 00:18:56.321 [2024-04-24 16:14:57.448411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ba20 (9): Bad file descriptor 00:18:56.321 [2024-04-24 16:14:57.448492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:56.321 [2024-04-24 16:14:57.448530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:56.321 [2024-04-24 16:14:57.448544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:56.321 [2024-04-24 16:14:57.448579] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.321 16:14:57 -- common/autotest_common.sh@904 -- # return 0 00:18:56.321 16:14:57 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:56.321 16:14:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:56.321 16:14:57 -- common/autotest_common.sh@901 -- # local max=10 00:18:56.321 16:14:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:56.321 16:14:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:56.321 16:14:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:56.321 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.321 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.321 16:14:57 -- host/discovery.sh@55 -- # sort 00:18:56.321 16:14:57 -- host/discovery.sh@55 -- # xargs 00:18:56.321 [2024-04-24 16:14:57.458013] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:56.321 [2024-04-24 16:14:57.458240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.458416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.458445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2ba20 with addr=10.0.0.2, port=4420 00:18:56.321 [2024-04-24 16:14:57.458464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ba20 is same with the state(5) to be set 00:18:56.321 [2024-04-24 16:14:57.458489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ba20 (9): Bad file descriptor 00:18:56.321 [2024-04-24 16:14:57.458525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:56.321 [2024-04-24 16:14:57.458544] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:56.321 [2024-04-24 16:14:57.458560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:56.321 [2024-04-24 16:14:57.458581] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
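One detail that is easy to miss in the retry noise: the get_bdev_list check interleaved above still expects "nvme0n1 nvme0n2". The failed resets concern only the removed 4420 path; the controller remains attached through 4421, so the namespace bdevs never go away. The equivalent manual check (a sketch, using the same jq/sort/xargs filtering the harness does):

  # both namespace bdevs should survive the loss of the 4420 path
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  # expected output: nvme0n1 nvme0n2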
00:18:56.321 [2024-04-24 16:14:57.468107] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:56.321 [2024-04-24 16:14:57.468319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.468458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.468484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2ba20 with addr=10.0.0.2, port=4420 00:18:56.321 [2024-04-24 16:14:57.468500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ba20 is same with the state(5) to be set 00:18:56.321 [2024-04-24 16:14:57.468522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ba20 (9): Bad file descriptor 00:18:56.321 [2024-04-24 16:14:57.468569] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:56.321 [2024-04-24 16:14:57.468588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:56.321 [2024-04-24 16:14:57.468602] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:56.321 [2024-04-24 16:14:57.468621] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:56.321 [2024-04-24 16:14:57.478191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:56.321 [2024-04-24 16:14:57.478447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.478629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.478655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2ba20 with addr=10.0.0.2, port=4420 00:18:56.321 [2024-04-24 16:14:57.478671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ba20 is same with the state(5) to be set 00:18:56.321 [2024-04-24 16:14:57.478693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ba20 (9): Bad file descriptor 00:18:56.321 [2024-04-24 16:14:57.478725] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:56.321 [2024-04-24 16:14:57.478752] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:56.321 [2024-04-24 16:14:57.478767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:56.321 [2024-04-24 16:14:57.478787] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
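The retry storm ends once the discovery poller re-reads the log page (just below): 4420 is reported "not found" and detached, leaving 4421 as the controller's only path. The waitforcondition helper that the trace keeps expanding is, in essence, this loop; a simplified sketch with the same bound (max=10) and sleep 1 cadence seen above:

  # poll until nvme0's only remaining path is 4421, giving up after 10 tries
  for i in $(seq 1 10); do
      paths=$(scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
      [ "$paths" = "4421" ] && break
      sleep 1
  done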
00:18:56.321 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.321 [2024-04-24 16:14:57.488276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:56.321 [2024-04-24 16:14:57.488497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.488648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.321 [2024-04-24 16:14:57.488678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2ba20 with addr=10.0.0.2, port=4420 00:18:56.321 [2024-04-24 16:14:57.488696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ba20 is same with the state(5) to be set 00:18:56.321 [2024-04-24 16:14:57.488721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ba20 (9): Bad file descriptor 00:18:56.321 [2024-04-24 16:14:57.488776] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:56.321 [2024-04-24 16:14:57.488816] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:56.321 [2024-04-24 16:14:57.488832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:56.321 [2024-04-24 16:14:57.488879] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:56.321 [2024-04-24 16:14:57.493821] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:56.321 [2024-04-24 16:14:57.493867] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:56.321 16:14:57 -- common/autotest_common.sh@904 -- # return 0 00:18:56.321 16:14:57 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:56.321 16:14:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:56.321 16:14:57 -- common/autotest_common.sh@901 -- # local max=10 00:18:56.321 16:14:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:56.321 16:14:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:56.321 16:14:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:56.321 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.321 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.321 16:14:57 -- host/discovery.sh@63 -- # sort -n 00:18:56.321 16:14:57 -- host/discovery.sh@63 -- # xargs 00:18:56.321 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:18:56.321 16:14:57 -- common/autotest_common.sh@904 -- # return 0 00:18:56.321 16:14:57 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:56.321 16:14:57 -- host/discovery.sh@79 -- # expected_count=0 00:18:56.321 16:14:57 -- 
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:56.321 16:14:57 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:56.321 16:14:57 -- common/autotest_common.sh@901 -- # local max=10 00:18:56.321 16:14:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:56.321 16:14:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:56.321 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.321 16:14:57 -- host/discovery.sh@74 -- # jq '. | length' 00:18:56.321 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.321 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.321 16:14:57 -- host/discovery.sh@74 -- # notification_count=0 00:18:56.321 16:14:57 -- host/discovery.sh@75 -- # notify_id=2 00:18:56.321 16:14:57 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:56.321 16:14:57 -- common/autotest_common.sh@904 -- # return 0 00:18:56.321 16:14:57 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:56.321 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.322 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.322 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.322 16:14:57 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:56.322 16:14:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:56.322 16:14:57 -- common/autotest_common.sh@901 -- # local max=10 00:18:56.322 16:14:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:56.322 16:14:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:56.322 16:14:57 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:56.322 16:14:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:56.322 16:14:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:56.322 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.322 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.322 16:14:57 -- host/discovery.sh@59 -- # sort 00:18:56.322 16:14:57 -- host/discovery.sh@59 -- # xargs 00:18:56.582 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.582 16:14:57 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:56.582 16:14:57 -- common/autotest_common.sh@904 -- # return 0 00:18:56.582 16:14:57 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:56.582 16:14:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:56.582 16:14:57 -- common/autotest_common.sh@901 -- # local max=10 00:18:56.582 16:14:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:56.582 16:14:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:56.582 16:14:57 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:56.582 16:14:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:56.582 16:14:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:56.582 16:14:57 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.582 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.582 16:14:57 -- host/discovery.sh@55 -- # sort 00:18:56.582 16:14:57 -- host/discovery.sh@55 -- # xargs 00:18:56.582 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.582 16:14:57 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:56.582 16:14:57 -- common/autotest_common.sh@904 -- # return 0 00:18:56.582 16:14:57 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:56.582 16:14:57 -- host/discovery.sh@79 -- # expected_count=2 00:18:56.582 16:14:57 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:56.582 16:14:57 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:56.582 16:14:57 -- common/autotest_common.sh@901 -- # local max=10 00:18:56.582 16:14:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:56.582 16:14:57 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:56.582 16:14:57 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:56.582 16:14:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:56.582 16:14:57 -- host/discovery.sh@74 -- # jq '. | length' 00:18:56.582 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.582 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:56.582 16:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.582 16:14:57 -- host/discovery.sh@74 -- # notification_count=2 00:18:56.582 16:14:57 -- host/discovery.sh@75 -- # notify_id=4 00:18:56.582 16:14:57 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:56.582 16:14:57 -- common/autotest_common.sh@904 -- # return 0 00:18:56.582 16:14:57 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:56.582 16:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.582 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:57.522 [2024-04-24 16:14:58.784655] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:57.522 [2024-04-24 16:14:58.784695] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:57.522 [2024-04-24 16:14:58.784721] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:57.780 [2024-04-24 16:14:58.914185] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:57.780 [2024-04-24 16:14:58.978649] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:57.780 [2024-04-24 16:14:58.978701] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:57.780 16:14:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.780 16:14:58 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:57.780 16:14:58 -- common/autotest_common.sh@638 -- # local es=0 00:18:57.780 16:14:58 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:57.780 16:14:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:57.780 16:14:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:57.780 16:14:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:57.780 16:14:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:57.780 16:14:58 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:57.780 16:14:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.780 16:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.780 request: 00:18:57.780 { 00:18:57.780 "name": "nvme", 00:18:57.780 "trtype": "tcp", 00:18:57.780 "traddr": "10.0.0.2", 00:18:57.780 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:57.780 "adrfam": "ipv4", 00:18:57.780 "trsvcid": "8009", 00:18:57.780 "wait_for_attach": true, 00:18:57.780 "method": "bdev_nvme_start_discovery", 00:18:57.780 "req_id": 1 00:18:57.780 } 00:18:57.780 Got JSON-RPC error response 00:18:57.780 response: 00:18:57.780 { 00:18:57.780 "code": -17, 00:18:57.780 "message": "File exists" 00:18:57.780 } 00:18:57.780 16:14:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:57.780 16:14:58 -- common/autotest_common.sh@641 -- # es=1 00:18:57.780 16:14:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:57.780 16:14:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:57.780 16:14:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:57.780 16:14:58 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:57.780 16:14:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:57.780 16:14:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:57.780 16:14:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.780 16:14:58 -- host/discovery.sh@67 -- # sort 00:18:57.780 16:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.780 16:14:58 -- host/discovery.sh@67 -- # xargs 00:18:57.780 16:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.780 16:14:59 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:57.780 16:14:59 -- host/discovery.sh@146 -- # get_bdev_list 00:18:57.780 16:14:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:57.780 16:14:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:57.780 16:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.780 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:57.780 16:14:59 -- host/discovery.sh@55 -- # sort 00:18:57.780 16:14:59 -- host/discovery.sh@55 -- # xargs 00:18:57.780 16:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.038 16:14:59 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:58.038 16:14:59 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:58.038 16:14:59 -- common/autotest_common.sh@638 -- # local es=0 00:18:58.038 16:14:59 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:58.038 16:14:59 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:58.038 16:14:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:58.038 16:14:59 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:58.038 16:14:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:58.038 16:14:59 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:58.038 16:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.038 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:58.038 request: 00:18:58.038 { 00:18:58.038 "name": "nvme_second", 00:18:58.038 "trtype": "tcp", 00:18:58.038 "traddr": "10.0.0.2", 00:18:58.038 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:58.038 "adrfam": "ipv4", 00:18:58.038 "trsvcid": "8009", 00:18:58.039 "wait_for_attach": true, 00:18:58.039 "method": "bdev_nvme_start_discovery", 00:18:58.039 "req_id": 1 00:18:58.039 } 00:18:58.039 Got JSON-RPC error response 00:18:58.039 response: 00:18:58.039 { 00:18:58.039 "code": -17, 00:18:58.039 "message": "File exists" 00:18:58.039 } 00:18:58.039 16:14:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:58.039 16:14:59 -- common/autotest_common.sh@641 -- # es=1 00:18:58.039 16:14:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:58.039 16:14:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:58.039 16:14:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:58.039 16:14:59 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:58.039 16:14:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:58.039 16:14:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:58.039 16:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.039 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:58.039 16:14:59 -- host/discovery.sh@67 -- # sort 00:18:58.039 16:14:59 -- host/discovery.sh@67 -- # xargs 00:18:58.039 16:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.039 16:14:59 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:58.039 16:14:59 -- host/discovery.sh@152 -- # get_bdev_list 00:18:58.039 16:14:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:58.039 16:14:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:58.039 16:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.039 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:58.039 16:14:59 -- host/discovery.sh@55 -- # sort 00:18:58.039 16:14:59 -- host/discovery.sh@55 -- # xargs 00:18:58.039 16:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.039 16:14:59 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:58.039 16:14:59 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:58.039 16:14:59 -- common/autotest_common.sh@638 -- # local es=0 00:18:58.039 16:14:59 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:58.039 16:14:59 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:58.039 16:14:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:58.039 16:14:59 -- common/autotest_common.sh@630 -- # 
type -t rpc_cmd 00:18:58.039 16:14:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:58.039 16:14:59 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:58.039 16:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.039 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:58.978 [2024-04-24 16:15:00.190430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:58.978 [2024-04-24 16:15:00.190626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:58.978 [2024-04-24 16:15:00.190653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27c40 with addr=10.0.0.2, port=8010 00:18:58.978 [2024-04-24 16:15:00.190683] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:58.978 [2024-04-24 16:15:00.190698] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:58.978 [2024-04-24 16:15:00.190711] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:59.912 [2024-04-24 16:15:01.192827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.912 [2024-04-24 16:15:01.193040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.912 [2024-04-24 16:15:01.193066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27c40 with addr=10.0.0.2, port=8010 00:18:59.912 [2024-04-24 16:15:01.193091] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:59.912 [2024-04-24 16:15:01.193106] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:59.912 [2024-04-24 16:15:01.193119] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:01.287 [2024-04-24 16:15:02.194970] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:01.288 request: 00:19:01.288 { 00:19:01.288 "name": "nvme_second", 00:19:01.288 "trtype": "tcp", 00:19:01.288 "traddr": "10.0.0.2", 00:19:01.288 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:01.288 "adrfam": "ipv4", 00:19:01.288 "trsvcid": "8010", 00:19:01.288 "attach_timeout_ms": 3000, 00:19:01.288 "method": "bdev_nvme_start_discovery", 00:19:01.288 "req_id": 1 00:19:01.288 } 00:19:01.288 Got JSON-RPC error response 00:19:01.288 response: 00:19:01.288 { 00:19:01.288 "code": -110, 00:19:01.288 "message": "Connection timed out" 00:19:01.288 } 00:19:01.288 16:15:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:01.288 16:15:02 -- common/autotest_common.sh@641 -- # es=1 00:19:01.288 16:15:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:01.288 16:15:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:01.288 16:15:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:01.288 16:15:02 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:01.288 16:15:02 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:01.288 16:15:02 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:01.288 16:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.288 16:15:02 -- common/autotest_common.sh@10 -- # set +x 00:19:01.288 16:15:02 -- host/discovery.sh@67 -- # sort 00:19:01.288 16:15:02 -- host/discovery.sh@67 -- # xargs 
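The two NOT-wrapped calls above exercise bdev_nvme_start_discovery's error paths, and the JSON request/response pairs in the trace are the actual RPC bodies: starting a second discovery service against 10.0.0.2:8009, whether under the existing name nvme or as nvme_second, is rejected with -17 "File exists", while pointing nvme_second at 8010, where nothing listens, exhausts the 3000 ms attach timeout and returns -110 "Connection timed out". Reproduced by hand (sketch, same flags as the trace; -w waits for attach, -T caps the wait in milliseconds):

  # -17 File exists: a discovery service for 10.0.0.2:8009 is already running
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # -110 Connection timed out: no listener on 8010, gives up after 3000 ms
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000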
00:19:01.288 16:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.288 16:15:02 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:01.288 16:15:02 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:01.288 16:15:02 -- host/discovery.sh@161 -- # kill 3449220 00:19:01.288 16:15:02 -- host/discovery.sh@162 -- # nvmftestfini 00:19:01.288 16:15:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:01.288 16:15:02 -- nvmf/common.sh@117 -- # sync 00:19:01.288 16:15:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:01.288 16:15:02 -- nvmf/common.sh@120 -- # set +e 00:19:01.288 16:15:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:01.288 16:15:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:01.288 rmmod nvme_tcp 00:19:01.288 rmmod nvme_fabrics 00:19:01.288 rmmod nvme_keyring 00:19:01.288 16:15:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.288 16:15:02 -- nvmf/common.sh@124 -- # set -e 00:19:01.288 16:15:02 -- nvmf/common.sh@125 -- # return 0 00:19:01.288 16:15:02 -- nvmf/common.sh@478 -- # '[' -n 3449193 ']' 00:19:01.288 16:15:02 -- nvmf/common.sh@479 -- # killprocess 3449193 00:19:01.288 16:15:02 -- common/autotest_common.sh@936 -- # '[' -z 3449193 ']' 00:19:01.288 16:15:02 -- common/autotest_common.sh@940 -- # kill -0 3449193 00:19:01.288 16:15:02 -- common/autotest_common.sh@941 -- # uname 00:19:01.288 16:15:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.288 16:15:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3449193 00:19:01.288 16:15:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:01.288 16:15:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:01.288 16:15:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3449193' 00:19:01.288 killing process with pid 3449193 00:19:01.288 16:15:02 -- common/autotest_common.sh@955 -- # kill 3449193 00:19:01.288 16:15:02 -- common/autotest_common.sh@960 -- # wait 3449193 00:19:01.546 16:15:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:01.546 16:15:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:01.546 16:15:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:01.546 16:15:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.546 16:15:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.546 16:15:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.546 16:15:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.546 16:15:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.491 16:15:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:03.491 00:19:03.491 real 0m12.981s 00:19:03.491 user 0m18.863s 00:19:03.491 sys 0m2.652s 00:19:03.491 16:15:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:03.491 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:19:03.491 ************************************ 00:19:03.491 END TEST nvmf_discovery 00:19:03.491 ************************************ 00:19:03.491 16:15:04 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:03.491 16:15:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:03.491 16:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:03.491 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:19:03.491 ************************************ 00:19:03.491 START 
TEST nvmf_discovery_remove_ifc 00:19:03.491 ************************************ 00:19:03.491 16:15:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:03.749 * Looking for test storage... 00:19:03.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:03.749 16:15:04 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.749 16:15:04 -- nvmf/common.sh@7 -- # uname -s 00:19:03.749 16:15:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.749 16:15:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.749 16:15:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.749 16:15:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.749 16:15:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.749 16:15:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.749 16:15:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.749 16:15:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.749 16:15:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.749 16:15:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.749 16:15:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:03.749 16:15:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:03.749 16:15:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.749 16:15:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.749 16:15:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.749 16:15:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.749 16:15:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.749 16:15:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.749 16:15:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.749 16:15:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.749 16:15:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.749 16:15:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.749 16:15:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.749 16:15:04 -- paths/export.sh@5 -- # export PATH 00:19:03.749 16:15:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.749 16:15:04 -- nvmf/common.sh@47 -- # : 0 00:19:03.749 16:15:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.749 16:15:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.749 16:15:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.749 16:15:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.749 16:15:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.749 16:15:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.749 16:15:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.749 16:15:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.749 16:15:04 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:03.749 16:15:04 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:03.749 16:15:04 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:03.749 16:15:04 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:03.749 16:15:04 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:03.749 16:15:04 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:03.749 16:15:04 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:03.749 16:15:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:03.749 16:15:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.749 16:15:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:03.749 16:15:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:03.749 16:15:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:03.749 16:15:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.749 16:15:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.749 16:15:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.749 16:15:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:03.749 16:15:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:03.749 16:15:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.749 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:19:05.653 16:15:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:05.653 16:15:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.653 16:15:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.653 16:15:06 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.653 16:15:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.653 16:15:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.653 16:15:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.653 16:15:06 -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.653 16:15:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.653 16:15:06 -- nvmf/common.sh@296 -- # e810=() 00:19:05.653 16:15:06 -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.653 16:15:06 -- nvmf/common.sh@297 -- # x722=() 00:19:05.653 16:15:06 -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.653 16:15:06 -- nvmf/common.sh@298 -- # mlx=() 00:19:05.653 16:15:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.653 16:15:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.653 16:15:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.653 16:15:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.653 16:15:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.653 16:15:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.653 16:15:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.653 16:15:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.653 16:15:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.653 16:15:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:05.653 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:05.653 16:15:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.653 16:15:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.654 16:15:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:05.654 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:05.654 16:15:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.654 16:15:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.654 16:15:06 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.654 16:15:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.654 16:15:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:05.654 16:15:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.654 16:15:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:05.654 Found net devices under 0000:09:00.0: cvl_0_0 00:19:05.654 16:15:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.654 16:15:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.654 16:15:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.654 16:15:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:05.654 16:15:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.654 16:15:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:05.654 Found net devices under 0000:09:00.1: cvl_0_1 00:19:05.654 16:15:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.654 16:15:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:05.654 16:15:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:05.654 16:15:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:05.654 16:15:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.654 16:15:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.654 16:15:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.654 16:15:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.654 16:15:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.654 16:15:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.654 16:15:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.654 16:15:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.654 16:15:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.654 16:15:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.654 16:15:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.654 16:15:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.654 16:15:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.654 16:15:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.654 16:15:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.654 16:15:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.654 16:15:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.654 16:15:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.654 16:15:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.654 16:15:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:05.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:19:05.654 00:19:05.654 --- 10.0.0.2 ping statistics --- 00:19:05.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.654 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:19:05.654 16:15:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:19:05.654 00:19:05.654 --- 10.0.0.1 ping statistics --- 00:19:05.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.654 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:19:05.654 16:15:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.654 16:15:06 -- nvmf/common.sh@411 -- # return 0 00:19:05.654 16:15:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:05.654 16:15:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.654 16:15:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:05.654 16:15:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.654 16:15:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:05.654 16:15:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:05.654 16:15:06 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:05.654 16:15:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:05.654 16:15:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:05.654 16:15:06 -- common/autotest_common.sh@10 -- # set +x 00:19:05.914 16:15:06 -- nvmf/common.sh@470 -- # nvmfpid=3452374 00:19:05.914 16:15:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:05.914 16:15:06 -- nvmf/common.sh@471 -- # waitforlisten 3452374 00:19:05.914 16:15:06 -- common/autotest_common.sh@817 -- # '[' -z 3452374 ']' 00:19:05.914 16:15:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.914 16:15:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:05.914 16:15:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.914 16:15:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:05.914 16:15:06 -- common/autotest_common.sh@10 -- # set +x 00:19:05.914 [2024-04-24 16:15:06.988703] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:19:05.914 [2024-04-24 16:15:06.988817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.914 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.914 [2024-04-24 16:15:07.062413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.914 [2024-04-24 16:15:07.172069] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.914 [2024-04-24 16:15:07.172142] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:05.914 [2024-04-24 16:15:07.172157] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.914 [2024-04-24 16:15:07.172170] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.914 [2024-04-24 16:15:07.172181] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.914 [2024-04-24 16:15:07.172229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.172 16:15:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:06.172 16:15:07 -- common/autotest_common.sh@850 -- # return 0 00:19:06.172 16:15:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:06.172 16:15:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:06.172 16:15:07 -- common/autotest_common.sh@10 -- # set +x 00:19:06.173 16:15:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.173 16:15:07 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:06.173 16:15:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.173 16:15:07 -- common/autotest_common.sh@10 -- # set +x 00:19:06.173 [2024-04-24 16:15:07.331495] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.173 [2024-04-24 16:15:07.339677] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:06.173 null0 00:19:06.173 [2024-04-24 16:15:07.371633] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.173 16:15:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.173 16:15:07 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3452579 00:19:06.173 16:15:07 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:06.173 16:15:07 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3452579 /tmp/host.sock 00:19:06.173 16:15:07 -- common/autotest_common.sh@817 -- # '[' -z 3452579 ']' 00:19:06.173 16:15:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:19:06.173 16:15:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:06.173 16:15:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:06.173 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:06.173 16:15:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:06.173 16:15:07 -- common/autotest_common.sh@10 -- # set +x 00:19:06.173 [2024-04-24 16:15:07.435882] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:19:06.173 [2024-04-24 16:15:07.435963] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452579 ] 00:19:06.435 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.435 [2024-04-24 16:15:07.500520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.435 [2024-04-24 16:15:07.612475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.435 16:15:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:06.435 16:15:07 -- common/autotest_common.sh@850 -- # return 0 00:19:06.435 16:15:07 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:06.435 16:15:07 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:06.435 16:15:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.435 16:15:07 -- common/autotest_common.sh@10 -- # set +x 00:19:06.435 16:15:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.435 16:15:07 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:06.435 16:15:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.435 16:15:07 -- common/autotest_common.sh@10 -- # set +x 00:19:06.723 16:15:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.723 16:15:07 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:06.723 16:15:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.723 16:15:07 -- common/autotest_common.sh@10 -- # set +x 00:19:07.681 [2024-04-24 16:15:08.779335] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:07.681 [2024-04-24 16:15:08.779370] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:07.681 [2024-04-24 16:15:08.779394] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:07.681 [2024-04-24 16:15:08.907829] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:07.939 [2024-04-24 16:15:09.009363] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:07.939 [2024-04-24 16:15:09.009430] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:07.939 [2024-04-24 16:15:09.009474] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:07.939 [2024-04-24 16:15:09.009501] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:07.939 [2024-04-24 16:15:09.009538] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:07.939 16:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:07.939 16:15:09 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:07.939 16:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.939 16:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:07.939 [2024-04-24 16:15:09.017766] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd2f280 was disconnected and freed. delete nvme_qpair. 00:19:07.939 16:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:07.939 16:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.939 16:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:07.939 16:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:07.939 16:15:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:08.873 16:15:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:08.873 16:15:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:08.873 16:15:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:08.873 16:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.873 16:15:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:08.873 16:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:08.873 16:15:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:09.131 16:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.131 16:15:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:09.131 16:15:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:10.069 16:15:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:10.069 16:15:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.069 16:15:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:10.069 16:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.069 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:19:10.069 16:15:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:10.069 16:15:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:10.069 16:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.069 16:15:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:10.069 16:15:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:11.007 16:15:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:11.007 16:15:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:11.007 16:15:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:11.007 16:15:12 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.007 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:19:11.007 16:15:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:11.007 16:15:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:11.007 16:15:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.007 16:15:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:11.007 16:15:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:12.385 16:15:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:12.385 16:15:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.385 16:15:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:12.385 16:15:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.385 16:15:13 -- common/autotest_common.sh@10 -- # set +x 00:19:12.385 16:15:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:12.385 16:15:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:12.385 16:15:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.385 16:15:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:12.385 16:15:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:13.323 16:15:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:13.323 16:15:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:13.323 16:15:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:13.323 16:15:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.323 16:15:14 -- common/autotest_common.sh@10 -- # set +x 00:19:13.323 16:15:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:13.323 16:15:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:13.323 16:15:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.323 16:15:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:13.323 16:15:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:13.323 [2024-04-24 16:15:14.450571] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:13.323 [2024-04-24 16:15:14.450638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.323 [2024-04-24 16:15:14.450662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.323 [2024-04-24 16:15:14.450682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.323 [2024-04-24 16:15:14.450698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.323 [2024-04-24 16:15:14.450713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.323 [2024-04-24 16:15:14.450729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.323 [2024-04-24 16:15:14.450752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.323 [2024-04-24 16:15:14.450784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.323 [2024-04-24 16:15:14.450807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.323 [2024-04-24 16:15:14.450821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.323 [2024-04-24 16:15:14.450834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf57a0 is same with the state(5) to be set 00:19:13.323 [2024-04-24 16:15:14.460589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf57a0 (9): Bad file descriptor 00:19:13.323 [2024-04-24 16:15:14.470638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:14.262 16:15:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:14.262 16:15:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.262 16:15:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.262 16:15:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:14.262 16:15:15 -- common/autotest_common.sh@10 -- # set +x 00:19:14.262 16:15:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:14.262 16:15:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:14.262 [2024-04-24 16:15:15.502783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:15.638 [2024-04-24 16:15:16.526779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:15.638 [2024-04-24 16:15:16.526829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf57a0 with addr=10.0.0.2, port=4420 00:19:15.639 [2024-04-24 16:15:16.526856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf57a0 is same with the state(5) to be set 00:19:15.639 [2024-04-24 16:15:16.527324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf57a0 (9): Bad file descriptor 00:19:15.639 [2024-04-24 16:15:16.527371] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
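Each one-second iteration above is the suite's wait_for_bdev/get_bdev_list pair: the bdev list is fetched over the host RPC socket and compared against the expected value until it matches. Paraphrased, using the same rpc | jq | sort | xargs pipeline the xtrace shows (the suite's version also bounds the wait):

  # Paraphrase of the polling helpers driving the loop above.
  get_bdev_list() {
    "$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs \
      | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
    # Poll until the joined bdev list equals $1 ("" = no bdevs left).
    while [[ "$(get_bdev_list)" != "$1" ]]; do
      sleep 1
    done
  }
  wait_for_bdev ""   # after the interface is pulled down, nvme0n1 must vanish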
00:19:15.639 [2024-04-24 16:15:16.527414] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:15.639 [2024-04-24 16:15:16.527454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.639 [2024-04-24 16:15:16.527478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.639 [2024-04-24 16:15:16.527498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.639 [2024-04-24 16:15:16.527513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.639 [2024-04-24 16:15:16.527529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.639 [2024-04-24 16:15:16.527544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.639 [2024-04-24 16:15:16.527559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.639 [2024-04-24 16:15:16.527574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.639 [2024-04-24 16:15:16.527590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.639 [2024-04-24 16:15:16.527605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.639 [2024-04-24 16:15:16.527619] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:19:15.639 [2024-04-24 16:15:16.527892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5bb0 (9): Bad file descriptor 00:19:15.639 [2024-04-24 16:15:16.528908] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:15.639 [2024-04-24 16:15:16.528936] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:15.639 16:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.639 16:15:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:15.639 16:15:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:16.575 16:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.575 16:15:17 -- common/autotest_common.sh@10 -- # set +x 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:16.575 16:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:16.575 16:15:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:16.576 16:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.576 16:15:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:16.576 16:15:17 -- common/autotest_common.sh@10 -- # set +x 00:19:16.576 16:15:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:16.576 16:15:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:16.576 16:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.576 16:15:17 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:16.576 16:15:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:17.516 [2024-04-24 16:15:18.544931] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:17.516 [2024-04-24 16:15:18.544965] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:17.516 [2024-04-24 16:15:18.544987] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:17.516 [2024-04-24 16:15:18.632279] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:17.516 16:15:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:17.516 16:15:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:17.516 16:15:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.516 16:15:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:17.516 16:15:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.516 16:15:18 -- common/autotest_common.sh@10 -- # set +x 00:19:17.516 
16:15:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:17.516 16:15:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.516 16:15:18 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:17.516 16:15:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:17.775 [2024-04-24 16:15:18.859940] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:17.775 [2024-04-24 16:15:18.859989] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:17.775 [2024-04-24 16:15:18.860038] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:17.775 [2024-04-24 16:15:18.860062] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:17.775 [2024-04-24 16:15:18.860076] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:17.775 [2024-04-24 16:15:18.863956] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd39710 was disconnected and freed. delete nvme_qpair. 00:19:18.712 16:15:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:18.712 16:15:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:18.712 16:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.712 16:15:19 -- common/autotest_common.sh@10 -- # set +x 00:19:18.712 16:15:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:18.712 16:15:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:18.712 16:15:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:18.712 16:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.712 16:15:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:18.712 16:15:19 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:18.712 16:15:19 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3452579 00:19:18.712 16:15:19 -- common/autotest_common.sh@936 -- # '[' -z 3452579 ']' 00:19:18.712 16:15:19 -- common/autotest_common.sh@940 -- # kill -0 3452579 00:19:18.712 16:15:19 -- common/autotest_common.sh@941 -- # uname 00:19:18.712 16:15:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:18.712 16:15:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3452579 00:19:18.712 16:15:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:18.712 16:15:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:18.712 16:15:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3452579' 00:19:18.712 killing process with pid 3452579 00:19:18.712 16:15:19 -- common/autotest_common.sh@955 -- # kill 3452579 00:19:18.712 16:15:19 -- common/autotest_common.sh@960 -- # wait 3452579 00:19:18.970 16:15:20 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:18.970 16:15:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:18.970 16:15:20 -- nvmf/common.sh@117 -- # sync 00:19:18.970 16:15:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.970 16:15:20 -- nvmf/common.sh@120 -- # set +e 00:19:18.970 16:15:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.970 16:15:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.970 rmmod nvme_tcp 00:19:18.970 rmmod nvme_fabrics 00:19:18.970 rmmod nvme_keyring 00:19:18.970 16:15:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.970 16:15:20 -- nvmf/common.sh@124 -- # set -e 00:19:18.970 16:15:20 -- 
nvmf/common.sh@125 -- # return 0 00:19:18.970 16:15:20 -- nvmf/common.sh@478 -- # '[' -n 3452374 ']' 00:19:18.970 16:15:20 -- nvmf/common.sh@479 -- # killprocess 3452374 00:19:18.970 16:15:20 -- common/autotest_common.sh@936 -- # '[' -z 3452374 ']' 00:19:18.970 16:15:20 -- common/autotest_common.sh@940 -- # kill -0 3452374 00:19:18.970 16:15:20 -- common/autotest_common.sh@941 -- # uname 00:19:18.970 16:15:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:18.970 16:15:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3452374 00:19:18.970 16:15:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:18.970 16:15:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:18.970 16:15:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3452374' 00:19:18.970 killing process with pid 3452374 00:19:18.970 16:15:20 -- common/autotest_common.sh@955 -- # kill 3452374 00:19:18.970 16:15:20 -- common/autotest_common.sh@960 -- # wait 3452374 00:19:19.228 16:15:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:19.228 16:15:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:19.228 16:15:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:19.229 16:15:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.229 16:15:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.229 16:15:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.229 16:15:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.229 16:15:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.133 16:15:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:21.133 00:19:21.133 real 0m17.636s 00:19:21.133 user 0m24.555s 00:19:21.133 sys 0m2.935s 00:19:21.133 16:15:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:21.133 16:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:21.133 ************************************ 00:19:21.133 END TEST nvmf_discovery_remove_ifc 00:19:21.133 ************************************ 00:19:21.393 16:15:22 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:21.393 16:15:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:21.393 16:15:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:21.393 16:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:21.393 ************************************ 00:19:21.393 START TEST nvmf_identify_kernel_target 00:19:21.393 ************************************ 00:19:21.393 16:15:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:21.393 * Looking for test storage... 
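The teardown just logged (nvmftestfini) follows a fixed pattern: kill the target, unload the kernel NVMe/TCP stack, and flush the initiator-side address. Reduced to its effect, with module and interface names taken from this run (the suite retries the module removal up to 20 times, as the xtrace shows):

  # Sketch of the nvmftestfini effect seen above.
  kill "$nvmfpid" 2>/dev/null || true
  sync
  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 0.2
  done
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1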
00:19:21.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:21.393 16:15:22 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.393 16:15:22 -- nvmf/common.sh@7 -- # uname -s 00:19:21.393 16:15:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.393 16:15:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.393 16:15:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.393 16:15:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.393 16:15:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.393 16:15:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.393 16:15:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.393 16:15:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.393 16:15:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.393 16:15:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.393 16:15:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.393 16:15:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.393 16:15:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.393 16:15:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.393 16:15:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.393 16:15:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.393 16:15:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.393 16:15:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.393 16:15:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.393 16:15:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.393 16:15:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.393 16:15:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.393 16:15:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.393 16:15:22 -- paths/export.sh@5 -- # export PATH 00:19:21.393 16:15:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.393 16:15:22 -- nvmf/common.sh@47 -- # : 0 00:19:21.393 16:15:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.393 16:15:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.393 16:15:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.393 16:15:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.393 16:15:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.393 16:15:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.393 16:15:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.393 16:15:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:21.393 16:15:22 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:21.393 16:15:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:21.393 16:15:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.393 16:15:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:21.393 16:15:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:21.393 16:15:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:21.393 16:15:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.393 16:15:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.393 16:15:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.393 16:15:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:21.393 16:15:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:21.393 16:15:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:21.393 16:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:23.301 16:15:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:23.301 16:15:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.301 16:15:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.301 16:15:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.301 16:15:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.301 16:15:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.301 16:15:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.301 16:15:24 -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.301 16:15:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.301 16:15:24 -- nvmf/common.sh@296 -- # e810=() 00:19:23.301 16:15:24 -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.301 16:15:24 -- nvmf/common.sh@297 -- # 
x722=() 00:19:23.301 16:15:24 -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.301 16:15:24 -- nvmf/common.sh@298 -- # mlx=() 00:19:23.301 16:15:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.301 16:15:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.301 16:15:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.301 16:15:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:23.301 16:15:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.301 16:15:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.301 16:15:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:23.301 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:23.301 16:15:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.301 16:15:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:23.301 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:23.301 16:15:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.301 16:15:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.301 16:15:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.301 16:15:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:23.301 16:15:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.301 16:15:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:23.301 Found net devices under 0000:09:00.0: cvl_0_0 00:19:23.301 16:15:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
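The "Found net devices under ..." lines that follow come from a plain sysfs glob: for each matched PCI function, the kernel exposes its net devices under /sys/bus/pci/devices/<bdf>/net/. The same lookup in isolation, using one of the E810 functions found in this scan:

  # Stand-alone version of the sysfs lookup used above.
  pci=0000:09:00.0
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found net devices under $pci: ${path##*/}"
  done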
00:19:23.301 16:15:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.301 16:15:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.301 16:15:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:23.301 16:15:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.301 16:15:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:23.301 Found net devices under 0000:09:00.1: cvl_0_1 00:19:23.301 16:15:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.301 16:15:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:23.301 16:15:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:23.301 16:15:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:23.301 16:15:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:23.301 16:15:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.301 16:15:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.301 16:15:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.301 16:15:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:23.301 16:15:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.301 16:15:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.301 16:15:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:23.301 16:15:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.301 16:15:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.301 16:15:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:23.301 16:15:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:23.301 16:15:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.301 16:15:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.301 16:15:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.301 16:15:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.301 16:15:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:23.302 16:15:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.562 16:15:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.562 16:15:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.562 16:15:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:23.562 00:19:23.562 --- 10.0.0.2 ping statistics --- 00:19:23.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.562 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:23.562 16:15:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:23.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:19:23.562 00:19:23.562 --- 10.0.0.1 ping statistics --- 00:19:23.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.562 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:23.562 16:15:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.562 16:15:24 -- nvmf/common.sh@411 -- # return 0 00:19:23.562 16:15:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:23.562 16:15:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.562 16:15:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:23.562 16:15:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:23.562 16:15:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.562 16:15:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:23.562 16:15:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:23.562 16:15:24 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:23.562 16:15:24 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:23.562 16:15:24 -- nvmf/common.sh@717 -- # local ip 00:19:23.562 16:15:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.562 16:15:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.562 16:15:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.562 16:15:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.562 16:15:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.562 16:15:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.562 16:15:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.562 16:15:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.562 16:15:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.562 16:15:24 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:23.562 16:15:24 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:23.562 16:15:24 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:23.562 16:15:24 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:23.562 16:15:24 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:23.562 16:15:24 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:23.562 16:15:24 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:23.562 16:15:24 -- nvmf/common.sh@628 -- # local block nvme 00:19:23.562 16:15:24 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:23.562 16:15:24 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:23.563 16:15:24 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:23.563 16:15:24 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:24.499 Waiting for block devices as requested 00:19:24.760 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:24.760 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:24.760 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:24.760 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:25.020 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:25.020 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:25.020 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:25.020 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:25.282 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:19:25.282 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:25.282 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:25.282 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:25.542 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:25.542 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:25.542 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:25.802 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:25.802 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:25.802 16:15:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:25.802 16:15:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:25.802 16:15:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:25.802 16:15:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:25.802 16:15:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:25.802 16:15:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:25.802 16:15:26 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:25.802 16:15:26 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:25.802 16:15:26 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:25.802 No valid GPT data, bailing 00:19:25.802 16:15:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:25.802 16:15:27 -- scripts/common.sh@391 -- # pt= 00:19:25.802 16:15:27 -- scripts/common.sh@392 -- # return 1 00:19:25.802 16:15:27 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:25.802 16:15:27 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:25.802 16:15:27 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:25.802 16:15:27 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:25.802 16:15:27 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:25.802 16:15:27 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:25.802 16:15:27 -- nvmf/common.sh@656 -- # echo 1 00:19:25.802 16:15:27 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:25.802 16:15:27 -- nvmf/common.sh@658 -- # echo 1 00:19:25.802 16:15:27 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:19:25.802 16:15:27 -- nvmf/common.sh@661 -- # echo tcp 00:19:25.802 16:15:27 -- nvmf/common.sh@662 -- # echo 4420 00:19:25.802 16:15:27 -- nvmf/common.sh@663 -- # echo ipv4 00:19:25.802 16:15:27 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:25.802 16:15:27 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:19:25.802 00:19:25.802 Discovery Log Number of Records 2, Generation counter 2 00:19:25.802 =====Discovery Log Entry 0====== 00:19:25.802 trtype: tcp 00:19:25.802 adrfam: ipv4 00:19:25.802 subtype: current discovery subsystem 00:19:25.802 treq: not specified, sq flow control disable supported 00:19:25.802 portid: 1 00:19:25.802 trsvcid: 4420 00:19:25.802 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:25.802 traddr: 10.0.0.1 00:19:25.802 eflags: none 00:19:25.802 sectype: none 00:19:25.802 =====Discovery Log Entry 1====== 00:19:25.802 trtype: tcp 00:19:25.802 adrfam: ipv4 00:19:25.802 subtype: nvme subsystem 00:19:25.803 treq: not specified, sq flow control disable supported 00:19:25.803 portid: 1 00:19:25.803 trsvcid: 4420 00:19:25.803 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:25.803 traddr: 10.0.0.1 00:19:25.803 eflags: none 00:19:25.803 sectype: none 00:19:25.803 16:15:27 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:25.803 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:26.064 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.064 ===================================================== 00:19:26.064 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:26.064 ===================================================== 00:19:26.064 Controller Capabilities/Features 00:19:26.064 ================================ 00:19:26.064 Vendor ID: 0000 00:19:26.064 Subsystem Vendor ID: 0000 00:19:26.064 Serial Number: 89c5f068da6fb19e09c9 00:19:26.064 Model Number: Linux 00:19:26.064 Firmware Version: 6.7.0-68 00:19:26.064 Recommended Arb Burst: 0 00:19:26.064 IEEE OUI Identifier: 00 00 00 00:19:26.064 Multi-path I/O 00:19:26.064 May have multiple subsystem ports: No 00:19:26.064 May have multiple controllers: No 00:19:26.064 Associated with SR-IOV VF: No 00:19:26.064 Max Data Transfer Size: Unlimited 00:19:26.064 Max Number of Namespaces: 0 00:19:26.064 Max Number of I/O Queues: 1024 00:19:26.064 NVMe Specification Version (VS): 1.3 00:19:26.064 NVMe Specification Version (Identify): 1.3 00:19:26.064 Maximum Queue Entries: 1024 00:19:26.064 Contiguous Queues Required: No 00:19:26.064 Arbitration Mechanisms Supported 00:19:26.064 Weighted Round Robin: Not Supported 00:19:26.064 Vendor Specific: Not Supported 00:19:26.064 Reset Timeout: 7500 ms 00:19:26.064 Doorbell Stride: 4 bytes 00:19:26.064 NVM Subsystem Reset: Not Supported 00:19:26.064 Command Sets Supported 00:19:26.064 NVM Command Set: Supported 00:19:26.064 Boot Partition: Not Supported 00:19:26.064 Memory Page Size Minimum: 4096 bytes 00:19:26.064 Memory Page Size Maximum: 4096 bytes 00:19:26.064 Persistent Memory Region: Not Supported 00:19:26.064 Optional Asynchronous Events Supported 00:19:26.065 Namespace Attribute Notices: Not Supported 00:19:26.065 Firmware Activation Notices: Not Supported 00:19:26.065 ANA Change Notices: Not Supported 00:19:26.065 PLE Aggregate Log Change Notices: Not Supported 00:19:26.065 LBA Status Info Alert Notices: Not Supported 00:19:26.065 EGE Aggregate Log Change Notices: Not Supported 00:19:26.065 Normal NVM Subsystem Shutdown event: Not Supported 00:19:26.065 Zone Descriptor Change Notices: Not Supported 00:19:26.065 Discovery Log Change Notices: Supported 
00:19:26.065 Controller Attributes 00:19:26.065 128-bit Host Identifier: Not Supported 00:19:26.065 Non-Operational Permissive Mode: Not Supported 00:19:26.065 NVM Sets: Not Supported 00:19:26.065 Read Recovery Levels: Not Supported 00:19:26.065 Endurance Groups: Not Supported 00:19:26.065 Predictable Latency Mode: Not Supported 00:19:26.065 Traffic Based Keep ALive: Not Supported 00:19:26.065 Namespace Granularity: Not Supported 00:19:26.065 SQ Associations: Not Supported 00:19:26.065 UUID List: Not Supported 00:19:26.065 Multi-Domain Subsystem: Not Supported 00:19:26.065 Fixed Capacity Management: Not Supported 00:19:26.065 Variable Capacity Management: Not Supported 00:19:26.065 Delete Endurance Group: Not Supported 00:19:26.065 Delete NVM Set: Not Supported 00:19:26.065 Extended LBA Formats Supported: Not Supported 00:19:26.065 Flexible Data Placement Supported: Not Supported 00:19:26.065 00:19:26.065 Controller Memory Buffer Support 00:19:26.065 ================================ 00:19:26.065 Supported: No 00:19:26.065 00:19:26.065 Persistent Memory Region Support 00:19:26.065 ================================ 00:19:26.065 Supported: No 00:19:26.065 00:19:26.065 Admin Command Set Attributes 00:19:26.065 ============================ 00:19:26.065 Security Send/Receive: Not Supported 00:19:26.065 Format NVM: Not Supported 00:19:26.065 Firmware Activate/Download: Not Supported 00:19:26.065 Namespace Management: Not Supported 00:19:26.065 Device Self-Test: Not Supported 00:19:26.065 Directives: Not Supported 00:19:26.065 NVMe-MI: Not Supported 00:19:26.065 Virtualization Management: Not Supported 00:19:26.065 Doorbell Buffer Config: Not Supported 00:19:26.065 Get LBA Status Capability: Not Supported 00:19:26.065 Command & Feature Lockdown Capability: Not Supported 00:19:26.065 Abort Command Limit: 1 00:19:26.065 Async Event Request Limit: 1 00:19:26.065 Number of Firmware Slots: N/A 00:19:26.065 Firmware Slot 1 Read-Only: N/A 00:19:26.065 Firmware Activation Without Reset: N/A 00:19:26.065 Multiple Update Detection Support: N/A 00:19:26.065 Firmware Update Granularity: No Information Provided 00:19:26.065 Per-Namespace SMART Log: No 00:19:26.065 Asymmetric Namespace Access Log Page: Not Supported 00:19:26.065 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:26.065 Command Effects Log Page: Not Supported 00:19:26.065 Get Log Page Extended Data: Supported 00:19:26.065 Telemetry Log Pages: Not Supported 00:19:26.065 Persistent Event Log Pages: Not Supported 00:19:26.065 Supported Log Pages Log Page: May Support 00:19:26.065 Commands Supported & Effects Log Page: Not Supported 00:19:26.065 Feature Identifiers & Effects Log Page:May Support 00:19:26.065 NVMe-MI Commands & Effects Log Page: May Support 00:19:26.065 Data Area 4 for Telemetry Log: Not Supported 00:19:26.065 Error Log Page Entries Supported: 1 00:19:26.065 Keep Alive: Not Supported 00:19:26.065 00:19:26.065 NVM Command Set Attributes 00:19:26.065 ========================== 00:19:26.065 Submission Queue Entry Size 00:19:26.065 Max: 1 00:19:26.065 Min: 1 00:19:26.065 Completion Queue Entry Size 00:19:26.065 Max: 1 00:19:26.065 Min: 1 00:19:26.065 Number of Namespaces: 0 00:19:26.065 Compare Command: Not Supported 00:19:26.065 Write Uncorrectable Command: Not Supported 00:19:26.065 Dataset Management Command: Not Supported 00:19:26.065 Write Zeroes Command: Not Supported 00:19:26.065 Set Features Save Field: Not Supported 00:19:26.065 Reservations: Not Supported 00:19:26.065 Timestamp: Not Supported 00:19:26.065 Copy: Not 
Supported 00:19:26.065 Volatile Write Cache: Not Present 00:19:26.065 Atomic Write Unit (Normal): 1 00:19:26.065 Atomic Write Unit (PFail): 1 00:19:26.065 Atomic Compare & Write Unit: 1 00:19:26.065 Fused Compare & Write: Not Supported 00:19:26.065 Scatter-Gather List 00:19:26.065 SGL Command Set: Supported 00:19:26.065 SGL Keyed: Not Supported 00:19:26.065 SGL Bit Bucket Descriptor: Not Supported 00:19:26.065 SGL Metadata Pointer: Not Supported 00:19:26.065 Oversized SGL: Not Supported 00:19:26.065 SGL Metadata Address: Not Supported 00:19:26.066 SGL Offset: Supported 00:19:26.066 Transport SGL Data Block: Not Supported 00:19:26.066 Replay Protected Memory Block: Not Supported 00:19:26.066 00:19:26.066 Firmware Slot Information 00:19:26.066 ========================= 00:19:26.066 Active slot: 0 00:19:26.066 00:19:26.066 00:19:26.066 Error Log 00:19:26.066 ========= 00:19:26.066 00:19:26.066 Active Namespaces 00:19:26.066 ================= 00:19:26.066 Discovery Log Page 00:19:26.066 ================== 00:19:26.066 Generation Counter: 2 00:19:26.066 Number of Records: 2 00:19:26.066 Record Format: 0 00:19:26.066 00:19:26.066 Discovery Log Entry 0 00:19:26.066 ---------------------- 00:19:26.066 Transport Type: 3 (TCP) 00:19:26.066 Address Family: 1 (IPv4) 00:19:26.066 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:26.066 Entry Flags: 00:19:26.066 Duplicate Returned Information: 0 00:19:26.066 Explicit Persistent Connection Support for Discovery: 0 00:19:26.066 Transport Requirements: 00:19:26.066 Secure Channel: Not Specified 00:19:26.066 Port ID: 1 (0x0001) 00:19:26.066 Controller ID: 65535 (0xffff) 00:19:26.066 Admin Max SQ Size: 32 00:19:26.066 Transport Service Identifier: 4420 00:19:26.066 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:26.066 Transport Address: 10.0.0.1 00:19:26.066 Discovery Log Entry 1 00:19:26.066 ---------------------- 00:19:26.066 Transport Type: 3 (TCP) 00:19:26.066 Address Family: 1 (IPv4) 00:19:26.066 Subsystem Type: 2 (NVM Subsystem) 00:19:26.066 Entry Flags: 00:19:26.066 Duplicate Returned Information: 0 00:19:26.066 Explicit Persistent Connection Support for Discovery: 0 00:19:26.066 Transport Requirements: 00:19:26.066 Secure Channel: Not Specified 00:19:26.066 Port ID: 1 (0x0001) 00:19:26.066 Controller ID: 65535 (0xffff) 00:19:26.066 Admin Max SQ Size: 32 00:19:26.066 Transport Service Identifier: 4420 00:19:26.066 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:26.066 Transport Address: 10.0.0.1 00:19:26.066 16:15:27 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:26.066 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.066 get_feature(0x01) failed 00:19:26.066 get_feature(0x02) failed 00:19:26.066 get_feature(0x04) failed 00:19:26.066 ===================================================== 00:19:26.066 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:26.066 ===================================================== 00:19:26.066 Controller Capabilities/Features 00:19:26.066 ================================ 00:19:26.066 Vendor ID: 0000 00:19:26.066 Subsystem Vendor ID: 0000 00:19:26.066 Serial Number: 31a28ced46f1bd518972 00:19:26.066 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:26.066 Firmware Version: 6.7.0-68 00:19:26.066 Recommended Arb Burst: 6 00:19:26.066 IEEE OUI Identifier: 00 00 00 
00:19:26.066 Multi-path I/O 00:19:26.066 May have multiple subsystem ports: Yes 00:19:26.066 May have multiple controllers: Yes 00:19:26.066 Associated with SR-IOV VF: No 00:19:26.066 Max Data Transfer Size: Unlimited 00:19:26.066 Max Number of Namespaces: 1024 00:19:26.066 Max Number of I/O Queues: 128 00:19:26.066 NVMe Specification Version (VS): 1.3 00:19:26.066 NVMe Specification Version (Identify): 1.3 00:19:26.066 Maximum Queue Entries: 1024 00:19:26.066 Contiguous Queues Required: No 00:19:26.066 Arbitration Mechanisms Supported 00:19:26.066 Weighted Round Robin: Not Supported 00:19:26.066 Vendor Specific: Not Supported 00:19:26.066 Reset Timeout: 7500 ms 00:19:26.066 Doorbell Stride: 4 bytes 00:19:26.066 NVM Subsystem Reset: Not Supported 00:19:26.066 Command Sets Supported 00:19:26.066 NVM Command Set: Supported 00:19:26.066 Boot Partition: Not Supported 00:19:26.066 Memory Page Size Minimum: 4096 bytes 00:19:26.066 Memory Page Size Maximum: 4096 bytes 00:19:26.066 Persistent Memory Region: Not Supported 00:19:26.066 Optional Asynchronous Events Supported 00:19:26.066 Namespace Attribute Notices: Supported 00:19:26.066 Firmware Activation Notices: Not Supported 00:19:26.066 ANA Change Notices: Supported 00:19:26.066 PLE Aggregate Log Change Notices: Not Supported 00:19:26.066 LBA Status Info Alert Notices: Not Supported 00:19:26.066 EGE Aggregate Log Change Notices: Not Supported 00:19:26.066 Normal NVM Subsystem Shutdown event: Not Supported 00:19:26.066 Zone Descriptor Change Notices: Not Supported 00:19:26.066 Discovery Log Change Notices: Not Supported 00:19:26.066 Controller Attributes 00:19:26.066 128-bit Host Identifier: Supported 00:19:26.067 Non-Operational Permissive Mode: Not Supported 00:19:26.067 NVM Sets: Not Supported 00:19:26.067 Read Recovery Levels: Not Supported 00:19:26.067 Endurance Groups: Not Supported 00:19:26.067 Predictable Latency Mode: Not Supported 00:19:26.067 Traffic Based Keep ALive: Supported 00:19:26.067 Namespace Granularity: Not Supported 00:19:26.067 SQ Associations: Not Supported 00:19:26.067 UUID List: Not Supported 00:19:26.067 Multi-Domain Subsystem: Not Supported 00:19:26.067 Fixed Capacity Management: Not Supported 00:19:26.067 Variable Capacity Management: Not Supported 00:19:26.067 Delete Endurance Group: Not Supported 00:19:26.067 Delete NVM Set: Not Supported 00:19:26.067 Extended LBA Formats Supported: Not Supported 00:19:26.067 Flexible Data Placement Supported: Not Supported 00:19:26.067 00:19:26.067 Controller Memory Buffer Support 00:19:26.067 ================================ 00:19:26.067 Supported: No 00:19:26.067 00:19:26.067 Persistent Memory Region Support 00:19:26.067 ================================ 00:19:26.067 Supported: No 00:19:26.067 00:19:26.067 Admin Command Set Attributes 00:19:26.067 ============================ 00:19:26.067 Security Send/Receive: Not Supported 00:19:26.067 Format NVM: Not Supported 00:19:26.067 Firmware Activate/Download: Not Supported 00:19:26.067 Namespace Management: Not Supported 00:19:26.067 Device Self-Test: Not Supported 00:19:26.067 Directives: Not Supported 00:19:26.067 NVMe-MI: Not Supported 00:19:26.067 Virtualization Management: Not Supported 00:19:26.067 Doorbell Buffer Config: Not Supported 00:19:26.067 Get LBA Status Capability: Not Supported 00:19:26.067 Command & Feature Lockdown Capability: Not Supported 00:19:26.067 Abort Command Limit: 4 00:19:26.067 Async Event Request Limit: 4 00:19:26.067 Number of Firmware Slots: N/A 00:19:26.067 Firmware Slot 1 Read-Only: N/A 00:19:26.067 
Firmware Activation Without Reset: N/A 00:19:26.067 Multiple Update Detection Support: N/A 00:19:26.067 Firmware Update Granularity: No Information Provided 00:19:26.067 Per-Namespace SMART Log: Yes 00:19:26.067 Asymmetric Namespace Access Log Page: Supported 00:19:26.067 ANA Transition Time : 10 sec 00:19:26.067 00:19:26.067 Asymmetric Namespace Access Capabilities 00:19:26.067 ANA Optimized State : Supported 00:19:26.067 ANA Non-Optimized State : Supported 00:19:26.067 ANA Inaccessible State : Supported 00:19:26.067 ANA Persistent Loss State : Supported 00:19:26.067 ANA Change State : Supported 00:19:26.067 ANAGRPID is not changed : No 00:19:26.067 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:26.067 00:19:26.067 ANA Group Identifier Maximum : 128 00:19:26.067 Number of ANA Group Identifiers : 128 00:19:26.067 Max Number of Allowed Namespaces : 1024 00:19:26.067 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:26.067 Command Effects Log Page: Supported 00:19:26.067 Get Log Page Extended Data: Supported 00:19:26.067 Telemetry Log Pages: Not Supported 00:19:26.067 Persistent Event Log Pages: Not Supported 00:19:26.067 Supported Log Pages Log Page: May Support 00:19:26.067 Commands Supported & Effects Log Page: Not Supported 00:19:26.067 Feature Identifiers & Effects Log Page:May Support 00:19:26.067 NVMe-MI Commands & Effects Log Page: May Support 00:19:26.067 Data Area 4 for Telemetry Log: Not Supported 00:19:26.067 Error Log Page Entries Supported: 128 00:19:26.067 Keep Alive: Supported 00:19:26.067 Keep Alive Granularity: 1000 ms 00:19:26.067 00:19:26.067 NVM Command Set Attributes 00:19:26.067 ========================== 00:19:26.067 Submission Queue Entry Size 00:19:26.067 Max: 64 00:19:26.067 Min: 64 00:19:26.067 Completion Queue Entry Size 00:19:26.067 Max: 16 00:19:26.067 Min: 16 00:19:26.067 Number of Namespaces: 1024 00:19:26.067 Compare Command: Not Supported 00:19:26.067 Write Uncorrectable Command: Not Supported 00:19:26.067 Dataset Management Command: Supported 00:19:26.067 Write Zeroes Command: Supported 00:19:26.067 Set Features Save Field: Not Supported 00:19:26.067 Reservations: Not Supported 00:19:26.067 Timestamp: Not Supported 00:19:26.067 Copy: Not Supported 00:19:26.067 Volatile Write Cache: Present 00:19:26.067 Atomic Write Unit (Normal): 1 00:19:26.067 Atomic Write Unit (PFail): 1 00:19:26.067 Atomic Compare & Write Unit: 1 00:19:26.067 Fused Compare & Write: Not Supported 00:19:26.067 Scatter-Gather List 00:19:26.067 SGL Command Set: Supported 00:19:26.067 SGL Keyed: Not Supported 00:19:26.067 SGL Bit Bucket Descriptor: Not Supported 00:19:26.067 SGL Metadata Pointer: Not Supported 00:19:26.067 Oversized SGL: Not Supported 00:19:26.067 SGL Metadata Address: Not Supported 00:19:26.067 SGL Offset: Supported 00:19:26.067 Transport SGL Data Block: Not Supported 00:19:26.067 Replay Protected Memory Block: Not Supported 00:19:26.067 00:19:26.067 Firmware Slot Information 00:19:26.067 ========================= 00:19:26.067 Active slot: 0 00:19:26.067 00:19:26.067 Asymmetric Namespace Access 00:19:26.067 =========================== 00:19:26.067 Change Count : 0 00:19:26.067 Number of ANA Group Descriptors : 1 00:19:26.067 ANA Group Descriptor : 0 00:19:26.067 ANA Group ID : 1 00:19:26.067 Number of NSID Values : 1 00:19:26.067 Change Count : 0 00:19:26.068 ANA State : 1 00:19:26.068 Namespace Identifier : 1 00:19:26.068 00:19:26.068 Commands Supported and Effects 00:19:26.068 ============================== 00:19:26.068 Admin Commands 00:19:26.068 -------------- 
00:19:26.068 Get Log Page (02h): Supported 00:19:26.068 Identify (06h): Supported 00:19:26.068 Abort (08h): Supported 00:19:26.068 Set Features (09h): Supported 00:19:26.068 Get Features (0Ah): Supported 00:19:26.068 Asynchronous Event Request (0Ch): Supported 00:19:26.068 Keep Alive (18h): Supported 00:19:26.068 I/O Commands 00:19:26.068 ------------ 00:19:26.068 Flush (00h): Supported 00:19:26.068 Write (01h): Supported LBA-Change 00:19:26.068 Read (02h): Supported 00:19:26.068 Write Zeroes (08h): Supported LBA-Change 00:19:26.068 Dataset Management (09h): Supported 00:19:26.068 00:19:26.068 Error Log 00:19:26.068 ========= 00:19:26.068 Entry: 0 00:19:26.068 Error Count: 0x3 00:19:26.068 Submission Queue Id: 0x0 00:19:26.068 Command Id: 0x5 00:19:26.068 Phase Bit: 0 00:19:26.068 Status Code: 0x2 00:19:26.068 Status Code Type: 0x0 00:19:26.068 Do Not Retry: 1 00:19:26.068 Error Location: 0x28 00:19:26.068 LBA: 0x0 00:19:26.068 Namespace: 0x0 00:19:26.068 Vendor Log Page: 0x0 00:19:26.068 ----------- 00:19:26.068 Entry: 1 00:19:26.068 Error Count: 0x2 00:19:26.068 Submission Queue Id: 0x0 00:19:26.068 Command Id: 0x5 00:19:26.068 Phase Bit: 0 00:19:26.068 Status Code: 0x2 00:19:26.068 Status Code Type: 0x0 00:19:26.068 Do Not Retry: 1 00:19:26.068 Error Location: 0x28 00:19:26.068 LBA: 0x0 00:19:26.068 Namespace: 0x0 00:19:26.068 Vendor Log Page: 0x0 00:19:26.068 ----------- 00:19:26.068 Entry: 2 00:19:26.068 Error Count: 0x1 00:19:26.068 Submission Queue Id: 0x0 00:19:26.068 Command Id: 0x4 00:19:26.068 Phase Bit: 0 00:19:26.068 Status Code: 0x2 00:19:26.068 Status Code Type: 0x0 00:19:26.068 Do Not Retry: 1 00:19:26.068 Error Location: 0x28 00:19:26.068 LBA: 0x0 00:19:26.068 Namespace: 0x0 00:19:26.068 Vendor Log Page: 0x0 00:19:26.068 00:19:26.068 Number of Queues 00:19:26.068 ================ 00:19:26.068 Number of I/O Submission Queues: 128 00:19:26.068 Number of I/O Completion Queues: 128 00:19:26.068 00:19:26.068 ZNS Specific Controller Data 00:19:26.068 ============================ 00:19:26.068 Zone Append Size Limit: 0 00:19:26.068 00:19:26.068 00:19:26.068 Active Namespaces 00:19:26.068 ================= 00:19:26.068 get_feature(0x05) failed 00:19:26.068 Namespace ID:1 00:19:26.068 Command Set Identifier: NVM (00h) 00:19:26.068 Deallocate: Supported 00:19:26.068 Deallocated/Unwritten Error: Not Supported 00:19:26.068 Deallocated Read Value: Unknown 00:19:26.068 Deallocate in Write Zeroes: Not Supported 00:19:26.068 Deallocated Guard Field: 0xFFFF 00:19:26.068 Flush: Supported 00:19:26.068 Reservation: Not Supported 00:19:26.068 Namespace Sharing Capabilities: Multiple Controllers 00:19:26.068 Size (in LBAs): 1953525168 (931GiB) 00:19:26.068 Capacity (in LBAs): 1953525168 (931GiB) 00:19:26.068 Utilization (in LBAs): 1953525168 (931GiB) 00:19:26.068 UUID: 81ae2fb9-b494-4d7f-8bd2-db4985e4c561 00:19:26.068 Thin Provisioning: Not Supported 00:19:26.068 Per-NS Atomic Units: Yes 00:19:26.068 Atomic Boundary Size (Normal): 0 00:19:26.068 Atomic Boundary Size (PFail): 0 00:19:26.068 Atomic Boundary Offset: 0 00:19:26.068 NGUID/EUI64 Never Reused: No 00:19:26.068 ANA group ID: 1 00:19:26.068 Namespace Write Protected: No 00:19:26.068 Number of LBA Formats: 1 00:19:26.068 Current LBA Format: LBA Format #00 00:19:26.068 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:26.068 00:19:26.068 16:15:27 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:26.068 16:15:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:26.068 16:15:27 -- nvmf/common.sh@117 -- # sync 00:19:26.068 16:15:27 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:26.068 16:15:27 -- nvmf/common.sh@120 -- # set +e 00:19:26.068 16:15:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:26.068 16:15:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:26.068 rmmod nvme_tcp 00:19:26.068 rmmod nvme_fabrics 00:19:26.069 16:15:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:26.069 16:15:27 -- nvmf/common.sh@124 -- # set -e 00:19:26.069 16:15:27 -- nvmf/common.sh@125 -- # return 0 00:19:26.069 16:15:27 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:26.069 16:15:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:26.069 16:15:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:26.069 16:15:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:26.069 16:15:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.069 16:15:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.069 16:15:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.069 16:15:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.069 16:15:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.976 16:15:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.976 16:15:29 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:27.976 16:15:29 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:27.976 16:15:29 -- nvmf/common.sh@675 -- # echo 0 00:19:27.976 16:15:29 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:27.976 16:15:29 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:28.235 16:15:29 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:28.235 16:15:29 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:28.235 16:15:29 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:28.235 16:15:29 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:28.235 16:15:29 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:29.171 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:29.171 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:29.171 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:29.171 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:29.171 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:29.171 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:29.171 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:29.171 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:29.171 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:29.171 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:29.171 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:29.171 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:29.171 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:29.171 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:29.171 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:29.171 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:30.108 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:19:30.368 00:19:30.368 real 0m8.962s 00:19:30.368 user 0m1.880s 00:19:30.368 sys 0m3.205s 00:19:30.368 16:15:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:30.368 16:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:30.368 ************************************ 00:19:30.368 END 
TEST nvmf_identify_kernel_target 00:19:30.368 ************************************ 00:19:30.368 16:15:31 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:30.368 16:15:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:30.368 16:15:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:30.368 16:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:30.368 ************************************ 00:19:30.368 START TEST nvmf_auth 00:19:30.368 ************************************ 00:19:30.368 16:15:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:30.627 * Looking for test storage... 00:19:30.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:30.627 16:15:31 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.627 16:15:31 -- nvmf/common.sh@7 -- # uname -s 00:19:30.627 16:15:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.627 16:15:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.627 16:15:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.627 16:15:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.627 16:15:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.627 16:15:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.627 16:15:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.627 16:15:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.627 16:15:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.627 16:15:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.627 16:15:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:30.627 16:15:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:30.627 16:15:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.627 16:15:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.627 16:15:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.627 16:15:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.627 16:15:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.627 16:15:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.627 16:15:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.627 16:15:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.627 16:15:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.627 16:15:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.627 16:15:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.627 16:15:31 -- paths/export.sh@5 -- # export PATH 00:19:30.628 16:15:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.628 16:15:31 -- nvmf/common.sh@47 -- # : 0 00:19:30.628 16:15:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:30.628 16:15:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.628 16:15:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.628 16:15:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.628 16:15:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.628 16:15:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:30.628 16:15:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.628 16:15:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.628 16:15:31 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:30.628 16:15:31 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:30.628 16:15:31 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:30.628 16:15:31 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:30.628 16:15:31 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:30.628 16:15:31 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:30.628 16:15:31 -- host/auth.sh@21 -- # keys=() 00:19:30.628 16:15:31 -- host/auth.sh@77 -- # nvmftestinit 00:19:30.628 16:15:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:30.628 16:15:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.628 16:15:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:30.628 16:15:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:30.628 16:15:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:30.628 16:15:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.628 16:15:31 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.628 16:15:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.628 16:15:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:30.628 16:15:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:30.628 16:15:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:30.628 16:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:32.531 16:15:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:32.531 16:15:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:32.531 16:15:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:32.531 16:15:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:32.531 16:15:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:32.531 16:15:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:32.531 16:15:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:32.531 16:15:33 -- nvmf/common.sh@295 -- # net_devs=() 00:19:32.531 16:15:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:32.531 16:15:33 -- nvmf/common.sh@296 -- # e810=() 00:19:32.531 16:15:33 -- nvmf/common.sh@296 -- # local -ga e810 00:19:32.531 16:15:33 -- nvmf/common.sh@297 -- # x722=() 00:19:32.531 16:15:33 -- nvmf/common.sh@297 -- # local -ga x722 00:19:32.531 16:15:33 -- nvmf/common.sh@298 -- # mlx=() 00:19:32.531 16:15:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:32.531 16:15:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.531 16:15:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:32.531 16:15:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:32.531 16:15:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:32.531 16:15:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.531 16:15:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:32.531 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:32.531 16:15:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.531 16:15:33 -- nvmf/common.sh@341 -- # echo 'Found 
0000:09:00.1 (0x8086 - 0x159b)' 00:19:32.531 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:32.531 16:15:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:32.531 16:15:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.531 16:15:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.531 16:15:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:32.531 16:15:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.531 16:15:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:32.531 Found net devices under 0000:09:00.0: cvl_0_0 00:19:32.531 16:15:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.531 16:15:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.531 16:15:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.531 16:15:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:32.531 16:15:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.531 16:15:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:32.531 Found net devices under 0000:09:00.1: cvl_0_1 00:19:32.531 16:15:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.531 16:15:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:32.531 16:15:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:32.531 16:15:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:32.531 16:15:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:32.531 16:15:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.531 16:15:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.531 16:15:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.531 16:15:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:32.531 16:15:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.531 16:15:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.531 16:15:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:32.531 16:15:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.531 16:15:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.531 16:15:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:32.531 16:15:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:32.531 16:15:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.531 16:15:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.531 16:15:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.531 16:15:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.531 16:15:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:32.531 16:15:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.789 16:15:33 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.789 16:15:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.789 16:15:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:32.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:19:32.789 00:19:32.789 --- 10.0.0.2 ping statistics --- 00:19:32.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.789 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:32.789 16:15:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:19:32.789 00:19:32.789 --- 10.0.0.1 ping statistics --- 00:19:32.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.789 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:19:32.789 16:15:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.789 16:15:33 -- nvmf/common.sh@411 -- # return 0 00:19:32.789 16:15:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:32.789 16:15:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.789 16:15:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:32.789 16:15:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:32.789 16:15:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.789 16:15:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:32.789 16:15:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:32.789 16:15:33 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:19:32.789 16:15:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:32.789 16:15:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:32.789 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:32.789 16:15:33 -- nvmf/common.sh@470 -- # nvmfpid=3460112 00:19:32.789 16:15:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:32.789 16:15:33 -- nvmf/common.sh@471 -- # waitforlisten 3460112 00:19:32.789 16:15:33 -- common/autotest_common.sh@817 -- # '[' -z 3460112 ']' 00:19:32.789 16:15:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.789 16:15:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:32.789 16:15:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
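Before nvmf_tgt comes up, nvmf_tcp_init has rebuilt the same single-host topology used in the identify test above: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while its peer (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of that sequence exactly as the log executes it, followed by the app start that nvmfappstart performs:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # root ns -> target ns sanity check
    # nvmfappstart then launches the target inside the namespace and waits
    # for its RPC socket (waitforlisten is the SPDK autotest helper):
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    waitforlisten $!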
00:19:32.789 16:15:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:32.789 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:33.047 16:15:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:33.047 16:15:34 -- common/autotest_common.sh@850 -- # return 0 00:19:33.047 16:15:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:33.047 16:15:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:33.047 16:15:34 -- common/autotest_common.sh@10 -- # set +x 00:19:33.047 16:15:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.047 16:15:34 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:33.047 16:15:34 -- host/auth.sh@81 -- # gen_key null 32 00:19:33.047 16:15:34 -- host/auth.sh@53 -- # local digest len file key 00:19:33.047 16:15:34 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.047 16:15:34 -- host/auth.sh@54 -- # local -A digests 00:19:33.047 16:15:34 -- host/auth.sh@56 -- # digest=null 00:19:33.047 16:15:34 -- host/auth.sh@56 -- # len=32 00:19:33.047 16:15:34 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:33.047 16:15:34 -- host/auth.sh@57 -- # key=8273c8d8f2d040d83c835bd68494e3e1 00:19:33.047 16:15:34 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:33.047 16:15:34 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.EbG 00:19:33.047 16:15:34 -- host/auth.sh@59 -- # format_dhchap_key 8273c8d8f2d040d83c835bd68494e3e1 0 00:19:33.047 16:15:34 -- nvmf/common.sh@708 -- # format_key DHHC-1 8273c8d8f2d040d83c835bd68494e3e1 0 00:19:33.047 16:15:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:33.047 16:15:34 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:33.047 16:15:34 -- nvmf/common.sh@693 -- # key=8273c8d8f2d040d83c835bd68494e3e1 00:19:33.047 16:15:34 -- nvmf/common.sh@693 -- # digest=0 00:19:33.047 16:15:34 -- nvmf/common.sh@694 -- # python - 00:19:33.047 16:15:34 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.EbG 00:19:33.047 16:15:34 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.EbG 00:19:33.047 16:15:34 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.EbG 00:19:33.047 16:15:34 -- host/auth.sh@82 -- # gen_key null 48 00:19:33.047 16:15:34 -- host/auth.sh@53 -- # local digest len file key 00:19:33.047 16:15:34 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.047 16:15:34 -- host/auth.sh@54 -- # local -A digests 00:19:33.047 16:15:34 -- host/auth.sh@56 -- # digest=null 00:19:33.047 16:15:34 -- host/auth.sh@56 -- # len=48 00:19:33.047 16:15:34 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.047 16:15:34 -- host/auth.sh@57 -- # key=4d5e352624d6a6c20fb8c3b4a153f2fee427dc44650181db 00:19:33.047 16:15:34 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:33.047 16:15:34 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.44I 00:19:33.047 16:15:34 -- host/auth.sh@59 -- # format_dhchap_key 4d5e352624d6a6c20fb8c3b4a153f2fee427dc44650181db 0 00:19:33.047 16:15:34 -- nvmf/common.sh@708 -- # format_key DHHC-1 4d5e352624d6a6c20fb8c3b4a153f2fee427dc44650181db 0 00:19:33.047 16:15:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:33.047 16:15:34 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:33.047 16:15:34 -- nvmf/common.sh@693 -- # key=4d5e352624d6a6c20fb8c3b4a153f2fee427dc44650181db 00:19:33.047 16:15:34 -- nvmf/common.sh@693 -- # 
digest=0 00:19:33.047 16:15:34 -- nvmf/common.sh@694 -- # python - 00:19:33.047 16:15:34 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.44I 00:19:33.047 16:15:34 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.44I 00:19:33.047 16:15:34 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.44I 00:19:33.047 16:15:34 -- host/auth.sh@83 -- # gen_key sha256 32 00:19:33.047 16:15:34 -- host/auth.sh@53 -- # local digest len file key 00:19:33.047 16:15:34 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.047 16:15:34 -- host/auth.sh@54 -- # local -A digests 00:19:33.047 16:15:34 -- host/auth.sh@56 -- # digest=sha256 00:19:33.047 16:15:34 -- host/auth.sh@56 -- # len=32 00:19:33.047 16:15:34 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:33.306 16:15:34 -- host/auth.sh@57 -- # key=131cadc97446fa532e04a326bba59c0c 00:19:33.306 16:15:34 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:19:33.306 16:15:34 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.zo7 00:19:33.306 16:15:34 -- host/auth.sh@59 -- # format_dhchap_key 131cadc97446fa532e04a326bba59c0c 1 00:19:33.306 16:15:34 -- nvmf/common.sh@708 -- # format_key DHHC-1 131cadc97446fa532e04a326bba59c0c 1 00:19:33.306 16:15:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:33.306 16:15:34 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:33.306 16:15:34 -- nvmf/common.sh@693 -- # key=131cadc97446fa532e04a326bba59c0c 00:19:33.306 16:15:34 -- nvmf/common.sh@693 -- # digest=1 00:19:33.306 16:15:34 -- nvmf/common.sh@694 -- # python - 00:19:33.306 16:15:34 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.zo7 00:19:33.306 16:15:34 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.zo7 00:19:33.306 16:15:34 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.zo7 00:19:33.306 16:15:34 -- host/auth.sh@84 -- # gen_key sha384 48 00:19:33.306 16:15:34 -- host/auth.sh@53 -- # local digest len file key 00:19:33.306 16:15:34 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.306 16:15:34 -- host/auth.sh@54 -- # local -A digests 00:19:33.306 16:15:34 -- host/auth.sh@56 -- # digest=sha384 00:19:33.306 16:15:34 -- host/auth.sh@56 -- # len=48 00:19:33.306 16:15:34 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.306 16:15:34 -- host/auth.sh@57 -- # key=def1743b4ea8bc8e355ff009e220906330fb5b5a8b131e31 00:19:33.306 16:15:34 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:19:33.306 16:15:34 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.C0z 00:19:33.306 16:15:34 -- host/auth.sh@59 -- # format_dhchap_key def1743b4ea8bc8e355ff009e220906330fb5b5a8b131e31 2 00:19:33.306 16:15:34 -- nvmf/common.sh@708 -- # format_key DHHC-1 def1743b4ea8bc8e355ff009e220906330fb5b5a8b131e31 2 00:19:33.306 16:15:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:33.306 16:15:34 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:33.306 16:15:34 -- nvmf/common.sh@693 -- # key=def1743b4ea8bc8e355ff009e220906330fb5b5a8b131e31 00:19:33.306 16:15:34 -- nvmf/common.sh@693 -- # digest=2 00:19:33.306 16:15:34 -- nvmf/common.sh@694 -- # python - 00:19:33.306 16:15:34 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.C0z 00:19:33.306 16:15:34 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.C0z 00:19:33.306 16:15:34 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.C0z 00:19:33.306 16:15:34 -- host/auth.sh@85 -- # gen_key sha512 64 00:19:33.306 16:15:34 -- host/auth.sh@53 -- # local digest len file key 00:19:33.306 16:15:34 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.306 16:15:34 -- host/auth.sh@54 -- # local -A digests 00:19:33.306 16:15:34 -- host/auth.sh@56 -- # digest=sha512 00:19:33.306 16:15:34 -- host/auth.sh@56 -- # len=64 00:19:33.306 16:15:34 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:33.306 16:15:34 -- host/auth.sh@57 -- # key=1168d7ce520de249a41590cf8e9adf3097bb9cfde27adb82a7d7281ac10e7e3f 00:19:33.306 16:15:34 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:19:33.306 16:15:34 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.JF6 00:19:33.306 16:15:34 -- host/auth.sh@59 -- # format_dhchap_key 1168d7ce520de249a41590cf8e9adf3097bb9cfde27adb82a7d7281ac10e7e3f 3 00:19:33.306 16:15:34 -- nvmf/common.sh@708 -- # format_key DHHC-1 1168d7ce520de249a41590cf8e9adf3097bb9cfde27adb82a7d7281ac10e7e3f 3 00:19:33.306 16:15:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:33.306 16:15:34 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:33.306 16:15:34 -- nvmf/common.sh@693 -- # key=1168d7ce520de249a41590cf8e9adf3097bb9cfde27adb82a7d7281ac10e7e3f 00:19:33.306 16:15:34 -- nvmf/common.sh@693 -- # digest=3 00:19:33.306 16:15:34 -- nvmf/common.sh@694 -- # python - 00:19:33.306 16:15:34 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.JF6 00:19:33.306 16:15:34 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.JF6 00:19:33.306 16:15:34 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.JF6 00:19:33.306 16:15:34 -- host/auth.sh@87 -- # waitforlisten 3460112 00:19:33.306 16:15:34 -- common/autotest_common.sh@817 -- # '[' -z 3460112 ']' 00:19:33.306 16:15:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.306 16:15:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:33.306 16:15:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
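gen_key, traced above, draws len/2 random bytes with xxd and wraps the resulting hex string as a DH-HMAC-CHAP secret via an inline python helper whose body the xtrace does not show. The sketch below assumes the standard DHHC-1 secret representation (base64 of the secret bytes followed by their little-endian CRC32, with hash id 0=null, 1=sha256, 2=sha384, 3=sha512); the ASCII hex string itself serves as the secret, which is why len is 32, 48, or 64:

    key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars for a 32-byte secret
    digest=1                               # sha256
    python3 -c 'import sys,base64,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"

Each generated key file is chmod 0600 and then handed to the target with rpc_cmd keyring_file_add_key, as the following lines show.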
00:19:33.306 16:15:34 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:33.306 16:15:34 -- common/autotest_common.sh@10 -- # set +x
00:19:33.564 16:15:34 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:33.564 16:15:34 -- common/autotest_common.sh@850 -- # return 0
00:19:33.564 16:15:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:19:33.564 16:15:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EbG
00:19:33.564 16:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:33.564 16:15:34 -- common/autotest_common.sh@10 -- # set +x
00:19:33.564 16:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:33.564 16:15:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:19:33.564 16:15:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.44I
00:19:33.564 16:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:33.564 16:15:34 -- common/autotest_common.sh@10 -- # set +x
00:19:33.564 16:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:33.564 16:15:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:19:33.564 16:15:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.zo7
00:19:33.564 16:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:33.564 16:15:34 -- common/autotest_common.sh@10 -- # set +x
00:19:33.564 16:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:33.564 16:15:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:19:33.564 16:15:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.C0z
00:19:33.564 16:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:33.564 16:15:34 -- common/autotest_common.sh@10 -- # set +x
00:19:33.564 16:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:33.564 16:15:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:19:33.564 16:15:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JF6
00:19:33.564 16:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:33.564 16:15:34 -- common/autotest_common.sh@10 -- # set +x
00:19:33.564 16:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:33.564 16:15:34 -- host/auth.sh@92 -- # nvmet_auth_init
00:19:33.564 16:15:34 -- host/auth.sh@35 -- # get_main_ns_ip
00:19:33.564 16:15:34 -- nvmf/common.sh@717 -- # local ip
00:19:33.564 16:15:34 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:33.564 16:15:34 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:33.564 16:15:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:33.565 16:15:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:33.565 16:15:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:33.565 16:15:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:33.565 16:15:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:33.565 16:15:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:33.565 16:15:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:33.565 16:15:34 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:19:33.565 16:15:34 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:19:33.565 16:15:34 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet
00:19:33.565 16:15:34 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:19:33.565 16:15:34 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:19:33.565 16:15:34 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:19:33.565 16:15:34 -- nvmf/common.sh@628 -- # local block nvme
00:19:33.565 16:15:34 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]]
00:19:33.565 16:15:34 -- nvmf/common.sh@631 -- # modprobe nvmet
00:19:33.565 16:15:34 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]]
00:19:33.565 16:15:34 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:19:34.941 Waiting for block devices as requested
00:19:34.941 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:19:34.941 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:19:34.941 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:19:34.941 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:19:34.941 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:19:35.200 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:19:35.200 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:19:35.200 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:19:35.200 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme
00:19:35.460 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:19:35.460 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:19:35.460 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:19:35.718 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:19:35.718 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:19:35.718 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:19:35.718 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:19:35.976 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:19:36.234 16:15:37 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme*
00:19:36.234 16:15:37 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]]
00:19:36.234 16:15:37 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1
00:19:36.234 16:15:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:19:36.234 16:15:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:19:36.234 16:15:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:19:36.234 16:15:37 -- nvmf/common.sh@642 -- # block_in_use nvme0n1
00:19:36.234 16:15:37 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:19:36.234 16:15:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:19:36.234 No valid GPT data, bailing
00:19:36.234 16:15:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:19:36.234 16:15:37 -- scripts/common.sh@391 -- # pt=
00:19:36.234 16:15:37 -- scripts/common.sh@392 -- # return 1
00:19:36.234 16:15:37 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1
00:19:36.234 16:15:37 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]]
00:19:36.234 16:15:37 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:19:36.234 16:15:37 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:19:36.234 16:15:37 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:19:36.234 16:15:37 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:19:36.234 16:15:37 -- nvmf/common.sh@656 -- # echo 1
00:19:36.234 16:15:37 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1
00:19:36.234 16:15:37 -- nvmf/common.sh@658 -- # echo 1
00:19:36.234 16:15:37 -- nvmf/common.sh@660 -- # echo 10.0.0.1
00:19:36.234 16:15:37 -- nvmf/common.sh@661 -- # echo tcp
00:19:36.234 16:15:37 -- nvmf/common.sh@662 -- # echo 4420
00:19:36.234 16:15:37 -- nvmf/common.sh@663 -- # echo ipv4
00:19:36.234 16:15:37 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:19:36.234 16:15:37 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420
00:19:36.234
00:19:36.234 Discovery Log Number of Records 2, Generation counter 2
00:19:36.234 =====Discovery Log Entry 0======
00:19:36.234 trtype: tcp
00:19:36.234 adrfam: ipv4
00:19:36.234 subtype: current discovery subsystem
00:19:36.234 treq: not specified, sq flow control disable supported
00:19:36.234 portid: 1
00:19:36.234 trsvcid: 4420
00:19:36.234 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:19:36.234 traddr: 10.0.0.1
00:19:36.234 eflags: none
00:19:36.234 sectype: none
00:19:36.234 =====Discovery Log Entry 1======
00:19:36.234 trtype: tcp
00:19:36.234 adrfam: ipv4
00:19:36.234 subtype: nvme subsystem
00:19:36.234 treq: not specified, sq flow control disable supported
00:19:36.234 portid: 1
00:19:36.234 trsvcid: 4420
00:19:36.234 subnqn: nqn.2024-02.io.spdk:cnode0
00:19:36.234 traddr: 10.0.0.1
00:19:36.234 eflags: none
00:19:36.234 sectype: none
00:19:36.234 16:15:37 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:19:36.234 16:15:37 -- host/auth.sh@37 -- # echo 0
00:19:36.234 16:15:37 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:19:36.234 16:15:37 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:19:36.234 16:15:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:36.234 16:15:37 -- host/auth.sh@44 -- # digest=sha256
00:19:36.234 16:15:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:36.234 16:15:37 -- host/auth.sh@44 -- # keyid=1
00:19:36.234 16:15:37 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==:
00:19:36.234 16:15:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:19:36.234 16:15:37 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:36.234 16:15:37 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==:
00:19:36.234 16:15:37 -- host/auth.sh@100 -- # IFS=,
00:19:36.234 16:15:37 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512
00:19:36.234 16:15:37 -- host/auth.sh@100 -- # IFS=,
00:19:36.234 16:15:37 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:36.234 16:15:37 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:19:36.234 16:15:37 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:36.234 16:15:37 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512
00:19:36.234 16:15:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:36.234 16:15:37 -- host/auth.sh@68 -- # keyid=1
00:19:36.234 16:15:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:36.234 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.234 16:15:37 -- common/autotest_common.sh@10 -- # set +x
00:19:36.234 16:15:37 --
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.234 16:15:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:36.234 16:15:37 -- nvmf/common.sh@717 -- # local ip 00:19:36.234 16:15:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:36.234 16:15:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:36.234 16:15:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.234 16:15:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.234 16:15:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:36.234 16:15:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.234 16:15:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:36.234 16:15:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:36.234 16:15:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:36.234 16:15:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:36.234 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.234 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.491 nvme0n1 00:19:36.491 16:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.491 16:15:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.491 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.491 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.491 16:15:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:36.491 16:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.491 16:15:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.491 16:15:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.491 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.491 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.491 16:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.491 16:15:37 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:36.491 16:15:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.491 16:15:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:36.491 16:15:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:36.491 16:15:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:36.491 16:15:37 -- host/auth.sh@44 -- # digest=sha256 00:19:36.491 16:15:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.491 16:15:37 -- host/auth.sh@44 -- # keyid=0 00:19:36.491 16:15:37 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:36.491 16:15:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:36.491 16:15:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:36.491 16:15:37 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:36.491 16:15:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:19:36.491 16:15:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:36.491 16:15:37 -- host/auth.sh@68 -- # digest=sha256 00:19:36.491 16:15:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:36.491 16:15:37 -- host/auth.sh@68 -- # keyid=0 00:19:36.491 16:15:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.491 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.491 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.491 16:15:37 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.491 16:15:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:36.491 16:15:37 -- nvmf/common.sh@717 -- # local ip 00:19:36.491 16:15:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:36.491 16:15:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:36.491 16:15:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.491 16:15:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.491 16:15:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:36.491 16:15:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.491 16:15:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:36.491 16:15:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:36.491 16:15:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:36.492 16:15:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:36.492 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.492 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 nvme0n1 00:19:36.749 16:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.749 16:15:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.749 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.749 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 16:15:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:36.749 16:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.749 16:15:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.749 16:15:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.749 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.749 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 16:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.749 16:15:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:36.749 16:15:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:36.749 16:15:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:36.749 16:15:37 -- host/auth.sh@44 -- # digest=sha256 00:19:36.749 16:15:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.749 16:15:37 -- host/auth.sh@44 -- # keyid=1 00:19:36.749 16:15:37 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:36.749 16:15:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:36.749 16:15:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:36.749 16:15:37 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:36.749 16:15:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:19:36.749 16:15:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:36.749 16:15:37 -- host/auth.sh@68 -- # digest=sha256 00:19:36.749 16:15:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:36.749 16:15:37 -- host/auth.sh@68 -- # keyid=1 00:19:36.749 16:15:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.749 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.749 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 16:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.749 16:15:37 -- host/auth.sh@70 -- # get_main_ns_ip 
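The get_main_ns_ip helper echoed just above (and expanded again in the lines that follow) reduces to a table lookup plus an indirect expansion: pick the environment variable that holds the target IP for the transport, then dereference it. In sketch form, with TEST_TRANSPORT as an assumed name for whatever variable carries "tcp" in this run:

get_main_ns_ip() {  # sketch; array and variable names follow the trace
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}  # tcp here, so NVMF_INITIATOR_IP
    echo "${!ip}"                         # indirect expansion: 10.0.0.1
}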
00:19:36.749 16:15:37 -- nvmf/common.sh@717 -- # local ip 00:19:36.749 16:15:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:36.749 16:15:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:36.749 16:15:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.749 16:15:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.749 16:15:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:36.749 16:15:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.749 16:15:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:36.749 16:15:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:36.749 16:15:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:36.749 16:15:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:36.749 16:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.749 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 nvme0n1 00:19:36.749 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.749 16:15:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.749 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.749 16:15:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:36.749 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.007 16:15:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.007 16:15:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.007 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.007 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.007 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.007 16:15:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.007 16:15:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:37.007 16:15:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.007 16:15:38 -- host/auth.sh@44 -- # digest=sha256 00:19:37.007 16:15:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.007 16:15:38 -- host/auth.sh@44 -- # keyid=2 00:19:37.007 16:15:38 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:37.007 16:15:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:37.007 16:15:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:37.007 16:15:38 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:37.007 16:15:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:19:37.007 16:15:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.007 16:15:38 -- host/auth.sh@68 -- # digest=sha256 00:19:37.007 16:15:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:37.007 16:15:38 -- host/auth.sh@68 -- # keyid=2 00:19:37.007 16:15:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.007 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.007 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.007 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.007 16:15:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.007 16:15:38 -- nvmf/common.sh@717 -- # local ip 00:19:37.007 16:15:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.007 16:15:38 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:19:37.007 16:15:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.007 16:15:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.007 16:15:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:37.007 16:15:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.007 16:15:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:37.007 16:15:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:37.007 16:15:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:37.007 16:15:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:37.007 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.007 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.007 nvme0n1 00:19:37.007 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.007 16:15:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.007 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.007 16:15:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.007 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.007 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.007 16:15:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.007 16:15:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.007 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.008 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.008 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.008 16:15:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.008 16:15:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:37.008 16:15:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.008 16:15:38 -- host/auth.sh@44 -- # digest=sha256 00:19:37.008 16:15:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.008 16:15:38 -- host/auth.sh@44 -- # keyid=3 00:19:37.008 16:15:38 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:37.008 16:15:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:37.008 16:15:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:37.008 16:15:38 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:37.008 16:15:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:19:37.008 16:15:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.008 16:15:38 -- host/auth.sh@68 -- # digest=sha256 00:19:37.008 16:15:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:37.008 16:15:38 -- host/auth.sh@68 -- # keyid=3 00:19:37.008 16:15:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.008 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.008 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.008 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.008 16:15:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.008 16:15:38 -- nvmf/common.sh@717 -- # local ip 00:19:37.008 16:15:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.008 16:15:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.008 16:15:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
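Stripped of the xtrace noise, every attach/verify/detach cycle running through this stretch of the log is the same four RPCs against the SPDK host. A condensed sketch of the loop body, with ./scripts/rpc.py standing in for the rpc_cmd wrapper (an assumption; the RPC names and arguments are taken verbatim from the trace):

connect_authenticate() {  # sketch, not the original host/auth.sh function
    local digest=$1 dhgroup=$2 keyid=$3
    # Restrict the host to the digest/dhgroup pair under test.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach to the kernel target, authenticating with one keyring key.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # The attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down.
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
}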
00:19:37.008 16:15:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.008 16:15:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:37.008 16:15:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.008 16:15:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:37.008 16:15:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:37.008 16:15:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:37.008 16:15:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:37.008 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.008 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.265 nvme0n1 00:19:37.265 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.265 16:15:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.265 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.265 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.265 16:15:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.265 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.265 16:15:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.265 16:15:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.265 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.265 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.265 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.265 16:15:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.265 16:15:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:37.266 16:15:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.266 16:15:38 -- host/auth.sh@44 -- # digest=sha256 00:19:37.266 16:15:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.266 16:15:38 -- host/auth.sh@44 -- # keyid=4 00:19:37.266 16:15:38 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:37.266 16:15:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:37.266 16:15:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:37.266 16:15:38 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:37.266 16:15:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:19:37.266 16:15:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.266 16:15:38 -- host/auth.sh@68 -- # digest=sha256 00:19:37.266 16:15:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:37.266 16:15:38 -- host/auth.sh@68 -- # keyid=4 00:19:37.266 16:15:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.266 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.266 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.266 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.266 16:15:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.266 16:15:38 -- nvmf/common.sh@717 -- # local ip 00:19:37.266 16:15:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.266 16:15:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.266 16:15:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.266 16:15:38 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.266 16:15:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:37.266 16:15:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.266 16:15:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:37.266 16:15:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:37.266 16:15:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:37.266 16:15:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:37.266 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.266 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.551 nvme0n1 00:19:37.551 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.551 16:15:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.551 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.551 16:15:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.551 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.551 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.551 16:15:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.551 16:15:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.551 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.551 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.551 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.551 16:15:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.551 16:15:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.551 16:15:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:37.551 16:15:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.551 16:15:38 -- host/auth.sh@44 -- # digest=sha256 00:19:37.551 16:15:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:37.551 16:15:38 -- host/auth.sh@44 -- # keyid=0 00:19:37.551 16:15:38 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:37.551 16:15:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:37.551 16:15:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:37.551 16:15:38 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:37.551 16:15:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:19:37.551 16:15:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.551 16:15:38 -- host/auth.sh@68 -- # digest=sha256 00:19:37.551 16:15:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:37.551 16:15:38 -- host/auth.sh@68 -- # keyid=0 00:19:37.551 16:15:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.551 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.551 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.551 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.551 16:15:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.551 16:15:38 -- nvmf/common.sh@717 -- # local ip 00:19:37.551 16:15:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.551 16:15:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.551 16:15:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.551 16:15:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.551 16:15:38 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:19:37.551 16:15:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.551 16:15:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:37.551 16:15:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:37.551 16:15:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:37.551 16:15:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:37.551 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.551 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.551 nvme0n1 00:19:37.551 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.551 16:15:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.551 16:15:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.551 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.551 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.551 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.832 16:15:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.832 16:15:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.832 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.832 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.832 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.832 16:15:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.832 16:15:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:37.832 16:15:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.832 16:15:38 -- host/auth.sh@44 -- # digest=sha256 00:19:37.832 16:15:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:37.832 16:15:38 -- host/auth.sh@44 -- # keyid=1 00:19:37.832 16:15:38 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:37.832 16:15:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:37.832 16:15:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:37.832 16:15:38 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:37.832 16:15:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:19:37.832 16:15:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.832 16:15:38 -- host/auth.sh@68 -- # digest=sha256 00:19:37.832 16:15:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:37.832 16:15:38 -- host/auth.sh@68 -- # keyid=1 00:19:37.832 16:15:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.832 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.832 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.832 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.832 16:15:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.832 16:15:38 -- nvmf/common.sh@717 -- # local ip 00:19:37.832 16:15:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.832 16:15:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.832 16:15:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.832 16:15:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.832 16:15:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:37.832 16:15:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.832 16:15:38 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:37.832 16:15:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:37.832 16:15:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:37.832 16:15:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:37.833 16:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.833 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:37.833 nvme0n1 00:19:37.833 16:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.833 16:15:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.833 16:15:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.833 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.833 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.833 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.833 16:15:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.833 16:15:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.833 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.833 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.833 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.833 16:15:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.833 16:15:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:37.833 16:15:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.833 16:15:39 -- host/auth.sh@44 -- # digest=sha256 00:19:37.833 16:15:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:37.833 16:15:39 -- host/auth.sh@44 -- # keyid=2 00:19:37.833 16:15:39 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:37.833 16:15:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:37.833 16:15:39 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:37.833 16:15:39 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:37.833 16:15:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:19:37.833 16:15:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.833 16:15:39 -- host/auth.sh@68 -- # digest=sha256 00:19:37.833 16:15:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:37.833 16:15:39 -- host/auth.sh@68 -- # keyid=2 00:19:37.833 16:15:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.833 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.833 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.833 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.833 16:15:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.833 16:15:39 -- nvmf/common.sh@717 -- # local ip 00:19:37.833 16:15:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.833 16:15:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.833 16:15:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.833 16:15:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.833 16:15:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:37.833 16:15:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.833 16:15:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:37.833 16:15:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:37.833 16:15:39 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:19:37.833 16:15:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:37.833 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.833 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.091 nvme0n1 00:19:38.091 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.091 16:15:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.091 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.091 16:15:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:38.091 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.091 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.091 16:15:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.091 16:15:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.091 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.091 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.091 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.091 16:15:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:38.091 16:15:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:38.091 16:15:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:38.091 16:15:39 -- host/auth.sh@44 -- # digest=sha256 00:19:38.091 16:15:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.091 16:15:39 -- host/auth.sh@44 -- # keyid=3 00:19:38.091 16:15:39 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:38.091 16:15:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:38.091 16:15:39 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:38.091 16:15:39 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:38.091 16:15:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:19:38.091 16:15:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:38.091 16:15:39 -- host/auth.sh@68 -- # digest=sha256 00:19:38.091 16:15:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:38.091 16:15:39 -- host/auth.sh@68 -- # keyid=3 00:19:38.091 16:15:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.091 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.091 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.091 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.091 16:15:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:38.091 16:15:39 -- nvmf/common.sh@717 -- # local ip 00:19:38.091 16:15:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:38.091 16:15:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:38.091 16:15:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.091 16:15:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.091 16:15:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:38.091 16:15:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.091 16:15:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:38.091 16:15:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:38.091 16:15:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:38.091 16:15:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:38.091 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.091 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.349 nvme0n1 00:19:38.349 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.349 16:15:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.349 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.349 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.349 16:15:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:38.349 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.349 16:15:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.349 16:15:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.349 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.349 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.349 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.349 16:15:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:38.349 16:15:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:38.349 16:15:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:38.349 16:15:39 -- host/auth.sh@44 -- # digest=sha256 00:19:38.349 16:15:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.349 16:15:39 -- host/auth.sh@44 -- # keyid=4 00:19:38.349 16:15:39 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:38.349 16:15:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:38.349 16:15:39 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:38.349 16:15:39 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:38.349 16:15:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:19:38.349 16:15:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:38.349 16:15:39 -- host/auth.sh@68 -- # digest=sha256 00:19:38.349 16:15:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:38.349 16:15:39 -- host/auth.sh@68 -- # keyid=4 00:19:38.349 16:15:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.349 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.349 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.349 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.349 16:15:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:38.349 16:15:39 -- nvmf/common.sh@717 -- # local ip 00:19:38.349 16:15:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:38.349 16:15:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:38.349 16:15:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.349 16:15:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.349 16:15:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:38.349 16:15:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.349 16:15:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:38.349 16:15:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:38.349 16:15:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:38.349 16:15:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:19:38.349 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.349 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.607 nvme0n1 00:19:38.607 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.607 16:15:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.607 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.607 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.607 16:15:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:38.607 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.607 16:15:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.607 16:15:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.607 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.607 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.607 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.607 16:15:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.607 16:15:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:38.607 16:15:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:38.607 16:15:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:38.607 16:15:39 -- host/auth.sh@44 -- # digest=sha256 00:19:38.607 16:15:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:38.607 16:15:39 -- host/auth.sh@44 -- # keyid=0 00:19:38.607 16:15:39 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:38.607 16:15:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:38.607 16:15:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:38.607 16:15:39 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:38.607 16:15:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:19:38.607 16:15:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:38.607 16:15:39 -- host/auth.sh@68 -- # digest=sha256 00:19:38.607 16:15:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:38.607 16:15:39 -- host/auth.sh@68 -- # keyid=0 00:19:38.607 16:15:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:38.607 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.607 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.607 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.607 16:15:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:38.607 16:15:39 -- nvmf/common.sh@717 -- # local ip 00:19:38.607 16:15:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:38.607 16:15:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:38.607 16:15:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.607 16:15:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.607 16:15:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:38.607 16:15:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.607 16:15:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:38.607 16:15:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:38.607 16:15:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:38.607 16:15:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:38.607 16:15:39 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:38.607 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.865 nvme0n1 00:19:38.865 16:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.865 16:15:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.865 16:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.865 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.865 16:15:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:38.865 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.865 16:15:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.865 16:15:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.865 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.865 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:38.865 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.865 16:15:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:38.865 16:15:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:38.865 16:15:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:38.865 16:15:40 -- host/auth.sh@44 -- # digest=sha256 00:19:38.865 16:15:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:38.865 16:15:40 -- host/auth.sh@44 -- # keyid=1 00:19:38.865 16:15:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:38.865 16:15:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:38.865 16:15:40 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:38.865 16:15:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:38.865 16:15:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:19:38.865 16:15:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:38.865 16:15:40 -- host/auth.sh@68 -- # digest=sha256 00:19:38.865 16:15:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:38.865 16:15:40 -- host/auth.sh@68 -- # keyid=1 00:19:38.865 16:15:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:38.865 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.865 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:38.865 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.865 16:15:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:38.865 16:15:40 -- nvmf/common.sh@717 -- # local ip 00:19:38.865 16:15:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:38.865 16:15:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:38.865 16:15:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.865 16:15:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.865 16:15:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:38.865 16:15:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.865 16:15:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:38.865 16:15:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:38.865 16:15:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:38.865 16:15:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:38.865 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.865 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:39.123 nvme0n1 00:19:39.123 
16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.123 16:15:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.123 16:15:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:39.123 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.123 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:39.123 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.123 16:15:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.123 16:15:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.123 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.123 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:39.123 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.123 16:15:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:39.123 16:15:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:39.123 16:15:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:39.123 16:15:40 -- host/auth.sh@44 -- # digest=sha256 00:19:39.123 16:15:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:39.123 16:15:40 -- host/auth.sh@44 -- # keyid=2 00:19:39.123 16:15:40 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:39.123 16:15:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:39.123 16:15:40 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:39.123 16:15:40 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:39.123 16:15:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:19:39.123 16:15:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:39.123 16:15:40 -- host/auth.sh@68 -- # digest=sha256 00:19:39.123 16:15:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:39.123 16:15:40 -- host/auth.sh@68 -- # keyid=2 00:19:39.123 16:15:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.123 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.123 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:39.123 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.123 16:15:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:39.123 16:15:40 -- nvmf/common.sh@717 -- # local ip 00:19:39.123 16:15:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:39.123 16:15:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:39.123 16:15:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.123 16:15:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.123 16:15:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:39.123 16:15:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.123 16:15:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:39.123 16:15:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:39.123 16:15:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:39.123 16:15:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:39.123 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.123 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:39.381 nvme0n1 00:19:39.381 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.381 16:15:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.381 16:15:40 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.381 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:39.381 16:15:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:39.381 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.638 16:15:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.638 16:15:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.638 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.638 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:39.638 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.638 16:15:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:39.638 16:15:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:39.638 16:15:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:39.638 16:15:40 -- host/auth.sh@44 -- # digest=sha256 00:19:39.638 16:15:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:39.638 16:15:40 -- host/auth.sh@44 -- # keyid=3 00:19:39.638 16:15:40 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:39.638 16:15:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:39.638 16:15:40 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:39.638 16:15:40 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:39.638 16:15:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:19:39.638 16:15:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:39.638 16:15:40 -- host/auth.sh@68 -- # digest=sha256 00:19:39.638 16:15:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:39.638 16:15:40 -- host/auth.sh@68 -- # keyid=3 00:19:39.638 16:15:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.638 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.638 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:39.638 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.638 16:15:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:39.638 16:15:40 -- nvmf/common.sh@717 -- # local ip 00:19:39.638 16:15:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:39.638 16:15:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:39.638 16:15:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.638 16:15:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.638 16:15:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:39.638 16:15:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.638 16:15:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:39.638 16:15:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:39.638 16:15:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:39.638 16:15:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:39.638 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.638 16:15:40 -- common/autotest_common.sh@10 -- # set +x 00:19:39.897 nvme0n1 00:19:39.897 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.897 16:15:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.897 16:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.897 16:15:40 -- common/autotest_common.sh@10 -- # set +x 
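On the target side, each nvmet_auth_set_key call visible in the trace amounts to three writes into the host entry created under configfs during nvmet_auth_init (the echo 'hmac(sha256)', echo ffdhe..., and echo DHHC-1:... lines). Schematically, assuming the kernel nvmet attribute names dhchap_hash, dhchap_dhgroup and dhchap_key:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"     # HMAC exactly as echoed above
echo ffdhe4096      > "$host/dhchap_dhgroup"  # DH group for this iteration
echo "$key"         > "$host/dhchap_key"      # the DHHC-1:..: secret for this keyid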
00:19:39.897 16:15:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:39.897 16:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.897 16:15:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.897 16:15:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.897 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.897 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:39.897 16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.897 16:15:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:39.897 16:15:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:39.897 16:15:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:39.897 16:15:41 -- host/auth.sh@44 -- # digest=sha256 00:19:39.897 16:15:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:39.897 16:15:41 -- host/auth.sh@44 -- # keyid=4 00:19:39.897 16:15:41 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:39.897 16:15:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:39.897 16:15:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:39.897 16:15:41 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:39.897 16:15:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:19:39.897 16:15:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:39.897 16:15:41 -- host/auth.sh@68 -- # digest=sha256 00:19:39.897 16:15:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:39.897 16:15:41 -- host/auth.sh@68 -- # keyid=4 00:19:39.897 16:15:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.897 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.897 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:39.897 16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.897 16:15:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:39.897 16:15:41 -- nvmf/common.sh@717 -- # local ip 00:19:39.897 16:15:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:39.897 16:15:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:39.897 16:15:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.897 16:15:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.897 16:15:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:39.897 16:15:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.897 16:15:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:39.897 16:15:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:39.897 16:15:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:39.897 16:15:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:39.897 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.897 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.154 nvme0n1 00:19:40.154 16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.154 16:15:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.155 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.155 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.155 16:15:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:40.155 
16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.155 16:15:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.155 16:15:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.155 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.155 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.155 16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.155 16:15:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.155 16:15:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:40.155 16:15:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:40.155 16:15:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:40.155 16:15:41 -- host/auth.sh@44 -- # digest=sha256 00:19:40.155 16:15:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.155 16:15:41 -- host/auth.sh@44 -- # keyid=0 00:19:40.155 16:15:41 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:40.155 16:15:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:40.155 16:15:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:40.155 16:15:41 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:40.155 16:15:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:19:40.155 16:15:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:40.155 16:15:41 -- host/auth.sh@68 -- # digest=sha256 00:19:40.155 16:15:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:40.155 16:15:41 -- host/auth.sh@68 -- # keyid=0 00:19:40.155 16:15:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:40.155 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.155 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.155 16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.155 16:15:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:40.155 16:15:41 -- nvmf/common.sh@717 -- # local ip 00:19:40.155 16:15:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.155 16:15:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.155 16:15:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.155 16:15:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.155 16:15:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:40.155 16:15:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.155 16:15:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:40.155 16:15:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:40.155 16:15:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:40.155 16:15:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:40.155 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.155 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.720 nvme0n1 00:19:40.720 16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.720 16:15:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.720 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.720 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.720 16:15:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:40.720 16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.720 16:15:41 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.720 16:15:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.720 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.720 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.720 16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.720 16:15:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:40.720 16:15:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:40.720 16:15:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:40.720 16:15:41 -- host/auth.sh@44 -- # digest=sha256 00:19:40.720 16:15:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.720 16:15:41 -- host/auth.sh@44 -- # keyid=1 00:19:40.720 16:15:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:40.720 16:15:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:40.720 16:15:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:40.720 16:15:41 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:40.720 16:15:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:19:40.720 16:15:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:40.720 16:15:41 -- host/auth.sh@68 -- # digest=sha256 00:19:40.720 16:15:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:40.720 16:15:41 -- host/auth.sh@68 -- # keyid=1 00:19:40.720 16:15:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:40.720 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.720 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.720 16:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.720 16:15:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:40.720 16:15:41 -- nvmf/common.sh@717 -- # local ip 00:19:40.720 16:15:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.720 16:15:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.720 16:15:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.720 16:15:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.720 16:15:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:40.720 16:15:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.720 16:15:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:40.720 16:15:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:40.720 16:15:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:40.720 16:15:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:40.720 16:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.720 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:41.287 nvme0n1 00:19:41.287 16:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.287 16:15:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.287 16:15:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:41.287 16:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.287 16:15:42 -- common/autotest_common.sh@10 -- # set +x 00:19:41.287 16:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.287 16:15:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.287 16:15:42 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:41.287 16:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.287 16:15:42 -- common/autotest_common.sh@10 -- # set +x 00:19:41.287 16:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.287 16:15:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:41.287 16:15:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:41.287 16:15:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:41.287 16:15:42 -- host/auth.sh@44 -- # digest=sha256 00:19:41.287 16:15:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:41.287 16:15:42 -- host/auth.sh@44 -- # keyid=2 00:19:41.287 16:15:42 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:41.287 16:15:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:41.287 16:15:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:41.287 16:15:42 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:41.287 16:15:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:19:41.287 16:15:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:41.287 16:15:42 -- host/auth.sh@68 -- # digest=sha256 00:19:41.287 16:15:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:41.287 16:15:42 -- host/auth.sh@68 -- # keyid=2 00:19:41.287 16:15:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.287 16:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.287 16:15:42 -- common/autotest_common.sh@10 -- # set +x 00:19:41.287 16:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.287 16:15:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:41.287 16:15:42 -- nvmf/common.sh@717 -- # local ip 00:19:41.287 16:15:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:41.287 16:15:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:41.287 16:15:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.287 16:15:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.287 16:15:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:41.287 16:15:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.287 16:15:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:41.287 16:15:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:41.287 16:15:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:41.287 16:15:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:41.287 16:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.287 16:15:42 -- common/autotest_common.sh@10 -- # set +x 00:19:41.852 nvme0n1 00:19:41.852 16:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.852 16:15:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.852 16:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.852 16:15:42 -- common/autotest_common.sh@10 -- # set +x 00:19:41.852 16:15:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:41.852 16:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.852 16:15:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.852 16:15:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.852 16:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.852 16:15:42 -- common/autotest_common.sh@10 -- # 
set +x 00:19:41.852 16:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.852 16:15:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:41.852 16:15:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:41.852 16:15:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:41.852 16:15:42 -- host/auth.sh@44 -- # digest=sha256 00:19:41.852 16:15:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:41.852 16:15:42 -- host/auth.sh@44 -- # keyid=3 00:19:41.852 16:15:42 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:41.852 16:15:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:41.852 16:15:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:41.852 16:15:42 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:41.852 16:15:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:19:41.852 16:15:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:41.852 16:15:42 -- host/auth.sh@68 -- # digest=sha256 00:19:41.852 16:15:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:41.852 16:15:42 -- host/auth.sh@68 -- # keyid=3 00:19:41.852 16:15:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.852 16:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.852 16:15:42 -- common/autotest_common.sh@10 -- # set +x 00:19:41.852 16:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.852 16:15:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:41.852 16:15:43 -- nvmf/common.sh@717 -- # local ip 00:19:41.852 16:15:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:41.852 16:15:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:41.852 16:15:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.852 16:15:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.852 16:15:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:41.852 16:15:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.852 16:15:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:41.852 16:15:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:41.852 16:15:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:41.852 16:15:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:41.852 16:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.852 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:19:42.417 nvme0n1 00:19:42.417 16:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.417 16:15:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.417 16:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.417 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:19:42.417 16:15:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:42.417 16:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.417 16:15:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.417 16:15:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.417 16:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.417 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:19:42.417 16:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.417 16:15:43 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:42.417 16:15:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:42.417 16:15:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:42.417 16:15:43 -- host/auth.sh@44 -- # digest=sha256 00:19:42.417 16:15:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:42.417 16:15:43 -- host/auth.sh@44 -- # keyid=4 00:19:42.417 16:15:43 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:42.417 16:15:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:42.417 16:15:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:42.417 16:15:43 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:42.417 16:15:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:19:42.417 16:15:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:42.417 16:15:43 -- host/auth.sh@68 -- # digest=sha256 00:19:42.417 16:15:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:42.417 16:15:43 -- host/auth.sh@68 -- # keyid=4 00:19:42.417 16:15:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.417 16:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.417 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:19:42.417 16:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.417 16:15:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:42.417 16:15:43 -- nvmf/common.sh@717 -- # local ip 00:19:42.417 16:15:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:42.417 16:15:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:42.417 16:15:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.417 16:15:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.417 16:15:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:42.417 16:15:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.417 16:15:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:42.417 16:15:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:42.417 16:15:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:42.417 16:15:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:42.417 16:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.417 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:19:42.982 nvme0n1 00:19:42.982 16:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.982 16:15:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.982 16:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.982 16:15:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:42.982 16:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.982 16:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.982 16:15:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.982 16:15:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.982 16:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.982 16:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.982 16:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.982 16:15:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.982 16:15:44 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:42.982 16:15:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:42.982 16:15:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:42.982 16:15:44 -- host/auth.sh@44 -- # digest=sha256 00:19:42.982 16:15:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:42.982 16:15:44 -- host/auth.sh@44 -- # keyid=0 00:19:42.982 16:15:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:42.982 16:15:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:42.982 16:15:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:42.982 16:15:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:42.982 16:15:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:19:42.982 16:15:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:42.982 16:15:44 -- host/auth.sh@68 -- # digest=sha256 00:19:42.982 16:15:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:42.982 16:15:44 -- host/auth.sh@68 -- # keyid=0 00:19:42.982 16:15:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.982 16:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.982 16:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.982 16:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.982 16:15:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:42.982 16:15:44 -- nvmf/common.sh@717 -- # local ip 00:19:42.982 16:15:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:42.982 16:15:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:42.982 16:15:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.982 16:15:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.982 16:15:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:42.982 16:15:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.982 16:15:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:42.982 16:15:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:42.982 16:15:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:42.982 16:15:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:42.982 16:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.982 16:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:43.914 nvme0n1 00:19:43.914 16:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.914 16:15:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.914 16:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.914 16:15:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:43.914 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.914 16:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.914 16:15:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.914 16:15:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.914 16:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.914 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.914 16:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.914 16:15:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:43.914 16:15:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:43.914 16:15:45 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:43.914 16:15:45 -- host/auth.sh@44 -- # digest=sha256 00:19:43.914 16:15:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:43.914 16:15:45 -- host/auth.sh@44 -- # keyid=1 00:19:43.914 16:15:45 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:43.914 16:15:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:43.914 16:15:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:43.914 16:15:45 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:43.914 16:15:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:19:43.914 16:15:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:43.914 16:15:45 -- host/auth.sh@68 -- # digest=sha256 00:19:43.914 16:15:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:43.914 16:15:45 -- host/auth.sh@68 -- # keyid=1 00:19:43.914 16:15:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.914 16:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.914 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.914 16:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.914 16:15:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:43.914 16:15:45 -- nvmf/common.sh@717 -- # local ip 00:19:43.914 16:15:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:43.914 16:15:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:43.914 16:15:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.914 16:15:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.914 16:15:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:43.914 16:15:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.914 16:15:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:43.914 16:15:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:43.914 16:15:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:43.914 16:15:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:43.914 16:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.914 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:19:44.844 nvme0n1 00:19:44.844 16:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.844 16:15:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.844 16:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.844 16:15:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:44.844 16:15:46 -- common/autotest_common.sh@10 -- # set +x 00:19:44.844 16:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.844 16:15:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.844 16:15:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.844 16:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.844 16:15:46 -- common/autotest_common.sh@10 -- # set +x 00:19:44.844 16:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.844 16:15:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:44.844 16:15:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:44.844 16:15:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:44.844 16:15:46 -- host/auth.sh@44 -- # digest=sha256 
00:19:44.844 16:15:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:44.844 16:15:46 -- host/auth.sh@44 -- # keyid=2 00:19:44.844 16:15:46 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:44.844 16:15:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:44.844 16:15:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:44.844 16:15:46 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:44.844 16:15:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:19:44.844 16:15:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:44.844 16:15:46 -- host/auth.sh@68 -- # digest=sha256 00:19:44.844 16:15:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:44.844 16:15:46 -- host/auth.sh@68 -- # keyid=2 00:19:44.844 16:15:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.844 16:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.844 16:15:46 -- common/autotest_common.sh@10 -- # set +x 00:19:44.844 16:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.844 16:15:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:44.844 16:15:46 -- nvmf/common.sh@717 -- # local ip 00:19:44.844 16:15:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:44.844 16:15:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:44.844 16:15:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.844 16:15:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.844 16:15:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:44.845 16:15:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.845 16:15:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:44.845 16:15:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:44.845 16:15:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:44.845 16:15:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:44.845 16:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.845 16:15:46 -- common/autotest_common.sh@10 -- # set +x 00:19:45.776 nvme0n1 00:19:45.776 16:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.776 16:15:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.776 16:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.776 16:15:47 -- common/autotest_common.sh@10 -- # set +x 00:19:45.776 16:15:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:45.776 16:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.033 16:15:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.033 16:15:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.033 16:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.033 16:15:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.033 16:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.033 16:15:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:46.033 16:15:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:46.033 16:15:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:46.033 16:15:47 -- host/auth.sh@44 -- # digest=sha256 00:19:46.033 16:15:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.033 16:15:47 -- host/auth.sh@44 -- # keyid=3 00:19:46.033 16:15:47 -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:46.033 16:15:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:46.033 16:15:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:46.033 16:15:47 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:46.033 16:15:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:19:46.033 16:15:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:46.033 16:15:47 -- host/auth.sh@68 -- # digest=sha256 00:19:46.033 16:15:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:46.033 16:15:47 -- host/auth.sh@68 -- # keyid=3 00:19:46.033 16:15:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.033 16:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.033 16:15:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.033 16:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.033 16:15:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:46.033 16:15:47 -- nvmf/common.sh@717 -- # local ip 00:19:46.033 16:15:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:46.033 16:15:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:46.033 16:15:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.033 16:15:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.033 16:15:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:46.033 16:15:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.033 16:15:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:46.033 16:15:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:46.033 16:15:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:46.033 16:15:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:46.033 16:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.033 16:15:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.966 nvme0n1 00:19:46.966 16:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.966 16:15:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.966 16:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.966 16:15:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.966 16:15:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:46.966 16:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.966 16:15:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.966 16:15:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.966 16:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.966 16:15:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.966 16:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.966 16:15:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:46.966 16:15:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:46.966 16:15:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:46.966 16:15:47 -- host/auth.sh@44 -- # digest=sha256 00:19:46.966 16:15:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.966 16:15:47 -- host/auth.sh@44 -- # keyid=4 00:19:46.966 16:15:47 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:46.966 
16:15:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:46.966 16:15:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:46.966 16:15:47 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:46.966 16:15:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:19:46.966 16:15:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:46.966 16:15:47 -- host/auth.sh@68 -- # digest=sha256 00:19:46.966 16:15:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:46.966 16:15:47 -- host/auth.sh@68 -- # keyid=4 00:19:46.966 16:15:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.966 16:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.966 16:15:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.966 16:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.966 16:15:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:46.966 16:15:47 -- nvmf/common.sh@717 -- # local ip 00:19:46.966 16:15:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:46.966 16:15:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:46.966 16:15:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.966 16:15:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.966 16:15:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:46.966 16:15:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.966 16:15:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:46.966 16:15:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:46.966 16:15:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:46.966 16:15:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:46.966 16:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.966 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.900 nvme0n1 00:19:47.900 16:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.900 16:15:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.900 16:15:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:47.900 16:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.900 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.900 16:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.900 16:15:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.900 16:15:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.900 16:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.900 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.900 16:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.900 16:15:48 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:47.900 16:15:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.900 16:15:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:47.900 16:15:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:47.900 16:15:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:47.900 16:15:48 -- host/auth.sh@44 -- # digest=sha384 00:19:47.900 16:15:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.900 16:15:48 -- host/auth.sh@44 -- # keyid=0 00:19:47.900 16:15:48 -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:47.900 16:15:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:47.900 16:15:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:47.900 16:15:48 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:47.900 16:15:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:19:47.900 16:15:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:47.900 16:15:48 -- host/auth.sh@68 -- # digest=sha384 00:19:47.900 16:15:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:47.900 16:15:48 -- host/auth.sh@68 -- # keyid=0 00:19:47.900 16:15:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.900 16:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.900 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.900 16:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.900 16:15:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:47.900 16:15:48 -- nvmf/common.sh@717 -- # local ip 00:19:47.900 16:15:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:47.900 16:15:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:47.900 16:15:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.900 16:15:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.900 16:15:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:47.900 16:15:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.900 16:15:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:47.900 16:15:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:47.900 16:15:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:47.900 16:15:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:47.900 16:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.900 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.900 nvme0n1 00:19:47.900 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.900 16:15:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.900 16:15:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:47.900 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.900 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.900 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.900 16:15:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.900 16:15:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.900 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.900 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.900 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.900 16:15:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:47.900 16:15:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:47.900 16:15:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:47.900 16:15:49 -- host/auth.sh@44 -- # digest=sha384 00:19:47.900 16:15:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.900 16:15:49 -- host/auth.sh@44 -- # keyid=1 00:19:47.900 16:15:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:47.900 16:15:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:47.900 
16:15:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:47.900 16:15:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:47.900 16:15:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:19:47.900 16:15:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:47.900 16:15:49 -- host/auth.sh@68 -- # digest=sha384 00:19:47.900 16:15:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:47.900 16:15:49 -- host/auth.sh@68 -- # keyid=1 00:19:47.900 16:15:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.900 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.900 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.900 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.900 16:15:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:47.900 16:15:49 -- nvmf/common.sh@717 -- # local ip 00:19:47.900 16:15:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:47.900 16:15:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:47.900 16:15:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.900 16:15:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.900 16:15:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:47.900 16:15:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.900 16:15:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:47.900 16:15:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:47.900 16:15:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:47.900 16:15:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:47.900 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.900 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.158 nvme0n1 00:19:48.158 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.158 16:15:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.158 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.158 16:15:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:48.158 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.158 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.158 16:15:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.158 16:15:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.158 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.158 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.158 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.158 16:15:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:48.158 16:15:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:48.158 16:15:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:48.159 16:15:49 -- host/auth.sh@44 -- # digest=sha384 00:19:48.159 16:15:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.159 16:15:49 -- host/auth.sh@44 -- # keyid=2 00:19:48.159 16:15:49 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:48.159 16:15:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:48.159 16:15:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:48.159 16:15:49 -- host/auth.sh@49 -- # echo 
DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:48.159 16:15:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:19:48.159 16:15:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:48.159 16:15:49 -- host/auth.sh@68 -- # digest=sha384 00:19:48.159 16:15:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:48.159 16:15:49 -- host/auth.sh@68 -- # keyid=2 00:19:48.159 16:15:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.159 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.159 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.159 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.159 16:15:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:48.159 16:15:49 -- nvmf/common.sh@717 -- # local ip 00:19:48.159 16:15:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:48.159 16:15:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:48.159 16:15:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.159 16:15:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.159 16:15:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:48.159 16:15:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.159 16:15:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:48.159 16:15:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:48.159 16:15:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:48.159 16:15:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:48.159 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.159 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.418 nvme0n1 00:19:48.418 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.418 16:15:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.418 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.418 16:15:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:48.418 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.418 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.418 16:15:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.418 16:15:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.418 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.418 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.418 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.418 16:15:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:48.418 16:15:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:48.418 16:15:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:48.418 16:15:49 -- host/auth.sh@44 -- # digest=sha384 00:19:48.418 16:15:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.418 16:15:49 -- host/auth.sh@44 -- # keyid=3 00:19:48.418 16:15:49 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:48.418 16:15:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:48.418 16:15:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:48.418 16:15:49 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:48.418 16:15:49 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:19:48.418 16:15:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:48.418 16:15:49 -- host/auth.sh@68 -- # digest=sha384 00:19:48.418 16:15:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:48.418 16:15:49 -- host/auth.sh@68 -- # keyid=3 00:19:48.418 16:15:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.418 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.418 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.418 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.418 16:15:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:48.418 16:15:49 -- nvmf/common.sh@717 -- # local ip 00:19:48.418 16:15:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:48.418 16:15:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:48.418 16:15:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.418 16:15:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.418 16:15:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:48.418 16:15:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.418 16:15:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:48.418 16:15:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:48.418 16:15:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:48.418 16:15:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:48.418 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.418 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.418 nvme0n1 00:19:48.418 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.418 16:15:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.418 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.418 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.418 16:15:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:48.418 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.418 16:15:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.418 16:15:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.418 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.418 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.418 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.418 16:15:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:48.418 16:15:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:48.418 16:15:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:48.418 16:15:49 -- host/auth.sh@44 -- # digest=sha384 00:19:48.418 16:15:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.418 16:15:49 -- host/auth.sh@44 -- # keyid=4 00:19:48.418 16:15:49 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:48.418 16:15:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:48.418 16:15:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:48.419 16:15:49 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:48.419 16:15:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:19:48.419 16:15:49 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:19:48.419 16:15:49 -- host/auth.sh@68 -- # digest=sha384 00:19:48.419 16:15:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:48.419 16:15:49 -- host/auth.sh@68 -- # keyid=4 00:19:48.419 16:15:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.419 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.419 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.419 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.419 16:15:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:48.419 16:15:49 -- nvmf/common.sh@717 -- # local ip 00:19:48.419 16:15:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:48.419 16:15:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:48.419 16:15:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.419 16:15:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.419 16:15:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:48.419 16:15:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.419 16:15:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:48.419 16:15:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:48.419 16:15:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:48.419 16:15:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:48.419 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.419 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.676 nvme0n1 00:19:48.676 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.676 16:15:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.676 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.676 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.676 16:15:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:48.676 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.676 16:15:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.676 16:15:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.676 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.676 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.676 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.676 16:15:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.676 16:15:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:48.676 16:15:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:48.676 16:15:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:48.676 16:15:49 -- host/auth.sh@44 -- # digest=sha384 00:19:48.676 16:15:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:48.676 16:15:49 -- host/auth.sh@44 -- # keyid=0 00:19:48.676 16:15:49 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:48.676 16:15:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:48.676 16:15:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:48.676 16:15:49 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:48.676 16:15:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:19:48.676 16:15:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:48.676 16:15:49 -- host/auth.sh@68 -- # 
digest=sha384 00:19:48.676 16:15:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:48.676 16:15:49 -- host/auth.sh@68 -- # keyid=0 00:19:48.676 16:15:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.676 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.676 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.676 16:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.676 16:15:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:48.676 16:15:49 -- nvmf/common.sh@717 -- # local ip 00:19:48.676 16:15:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:48.676 16:15:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:48.676 16:15:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.676 16:15:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.676 16:15:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:48.676 16:15:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.676 16:15:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:48.676 16:15:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:48.676 16:15:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:48.676 16:15:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:48.676 16:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.676 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.934 nvme0n1 00:19:48.934 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.934 16:15:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.934 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.934 16:15:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:48.934 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.934 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.934 16:15:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.934 16:15:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.934 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.934 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.934 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.934 16:15:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:48.934 16:15:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:48.934 16:15:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:48.934 16:15:50 -- host/auth.sh@44 -- # digest=sha384 00:19:48.934 16:15:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:48.934 16:15:50 -- host/auth.sh@44 -- # keyid=1 00:19:48.934 16:15:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:48.934 16:15:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:48.934 16:15:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:48.934 16:15:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:48.934 16:15:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:19:48.934 16:15:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:48.934 16:15:50 -- host/auth.sh@68 -- # digest=sha384 00:19:48.934 16:15:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:48.934 16:15:50 -- host/auth.sh@68 
-- # keyid=1 00:19:48.934 16:15:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.934 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.934 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.934 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.934 16:15:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:48.934 16:15:50 -- nvmf/common.sh@717 -- # local ip 00:19:48.934 16:15:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:48.934 16:15:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:48.934 16:15:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.934 16:15:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.934 16:15:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:48.934 16:15:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.934 16:15:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:48.934 16:15:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:48.934 16:15:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:48.934 16:15:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:48.934 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.934 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.192 nvme0n1 00:19:49.192 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.192 16:15:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.192 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.192 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.192 16:15:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:49.192 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.192 16:15:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.192 16:15:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.192 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.192 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.192 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.192 16:15:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:49.192 16:15:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:49.192 16:15:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:49.192 16:15:50 -- host/auth.sh@44 -- # digest=sha384 00:19:49.192 16:15:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.192 16:15:50 -- host/auth.sh@44 -- # keyid=2 00:19:49.192 16:15:50 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:49.192 16:15:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:49.192 16:15:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:49.192 16:15:50 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:49.192 16:15:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:19:49.192 16:15:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:49.192 16:15:50 -- host/auth.sh@68 -- # digest=sha384 00:19:49.192 16:15:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:49.192 16:15:50 -- host/auth.sh@68 -- # keyid=2 00:19:49.192 16:15:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.192 16:15:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.192 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.192 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.192 16:15:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:49.192 16:15:50 -- nvmf/common.sh@717 -- # local ip 00:19:49.192 16:15:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:49.192 16:15:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:49.192 16:15:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.192 16:15:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.192 16:15:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:49.192 16:15:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.192 16:15:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:49.192 16:15:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:49.192 16:15:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:49.192 16:15:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:49.192 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.192 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.450 nvme0n1 00:19:49.450 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.450 16:15:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.450 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.450 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.450 16:15:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:49.450 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.450 16:15:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.450 16:15:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.450 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.450 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.450 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.450 16:15:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:49.450 16:15:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:49.450 16:15:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:49.450 16:15:50 -- host/auth.sh@44 -- # digest=sha384 00:19:49.450 16:15:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.450 16:15:50 -- host/auth.sh@44 -- # keyid=3 00:19:49.450 16:15:50 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:49.450 16:15:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:49.450 16:15:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:49.450 16:15:50 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:49.450 16:15:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:19:49.451 16:15:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:49.451 16:15:50 -- host/auth.sh@68 -- # digest=sha384 00:19:49.451 16:15:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:49.451 16:15:50 -- host/auth.sh@68 -- # keyid=3 00:19:49.451 16:15:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.451 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.451 16:15:50 -- common/autotest_common.sh@10 -- # set +x 
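Each iteration traced above follows the same initiator-side RPC sequence. A minimal sketch of one pass, using the RPCs exactly as the trace shows them (rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; the address, port, and NQNs are the ones this run uses):

    # One connect_authenticate pass, reassembled from the trace above.
    # rpc_cmd, the 10.0.0.1:4420 listener, and the host/subsystem NQNs are
    # taken from this log; treat this as a sketch of the flow, not the script itself.
    digest=sha384
    dhgroup=ffdhe3072
    keyid=0

    # Restrict the initiator to the digest/DH group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with the DH-HMAC-CHAP secret matching the target-side keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid"

    # Confirm the controller exists, then detach for the next combination.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
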
00:19:49.451 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.451 16:15:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:49.451 16:15:50 -- nvmf/common.sh@717 -- # local ip 00:19:49.451 16:15:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:49.451 16:15:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:49.451 16:15:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.451 16:15:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.451 16:15:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:49.451 16:15:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.451 16:15:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:49.451 16:15:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:49.451 16:15:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:49.451 16:15:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:49.451 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.451 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.451 nvme0n1 00:19:49.451 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.451 16:15:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.451 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.451 16:15:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:49.451 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.451 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.709 16:15:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.709 16:15:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.709 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.709 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.709 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.709 16:15:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:49.709 16:15:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:49.709 16:15:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:49.709 16:15:50 -- host/auth.sh@44 -- # digest=sha384 00:19:49.709 16:15:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.709 16:15:50 -- host/auth.sh@44 -- # keyid=4 00:19:49.709 16:15:50 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:49.709 16:15:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:49.709 16:15:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:49.709 16:15:50 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:49.709 16:15:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:19:49.709 16:15:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:49.709 16:15:50 -- host/auth.sh@68 -- # digest=sha384 00:19:49.709 16:15:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:49.709 16:15:50 -- host/auth.sh@68 -- # keyid=4 00:19:49.709 16:15:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.709 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.709 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.709 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
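On the target side, the nvmet_auth_set_key calls traced above install the matching digest, DH group, and secret before each connect. The trace shows only the three echoed values, not where they are written; a hedged reconstruction, assuming the kernel nvmet configfs attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>/ as the destinations:

    # Reconstruction of nvmet_auth_set_key from the echoes in the trace.
    # The configfs paths below are an assumption; only the echoed values
    # ('hmac(sha384)', the dhgroup, the DHHC-1 secret) come from this log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}   # e.g. DHHC-1:03:MTE2OGQ3...N2UzZmQQwCw=: for keyid=4
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"      # hmac(sha384) in this excerpt
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # ffdhe3072 at this point in the run
        echo "$key"          > "$host/dhchap_key"
    }
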
00:19:49.709 16:15:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:49.709 16:15:50 -- nvmf/common.sh@717 -- # local ip 00:19:49.709 16:15:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:49.709 16:15:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:49.709 16:15:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.709 16:15:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.709 16:15:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:49.709 16:15:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.709 16:15:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:49.709 16:15:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:49.709 16:15:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:49.709 16:15:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:49.709 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.709 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.709 nvme0n1 00:19:49.709 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.709 16:15:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.709 16:15:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:49.709 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.709 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.709 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.709 16:15:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.709 16:15:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.709 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.709 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.709 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.709 16:15:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.709 16:15:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:49.709 16:15:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:49.709 16:15:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:49.709 16:15:50 -- host/auth.sh@44 -- # digest=sha384 00:19:49.709 16:15:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:49.709 16:15:50 -- host/auth.sh@44 -- # keyid=0 00:19:49.709 16:15:50 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:49.709 16:15:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:49.709 16:15:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:49.709 16:15:50 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:49.709 16:15:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:19:49.709 16:15:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:49.709 16:15:50 -- host/auth.sh@68 -- # digest=sha384 00:19:49.709 16:15:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:49.709 16:15:50 -- host/auth.sh@68 -- # keyid=0 00:19:49.709 16:15:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:49.709 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.709 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.709 16:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.709 16:15:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:49.709 16:15:50 -- 
nvmf/common.sh@717 -- # local ip 00:19:49.709 16:15:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:49.709 16:15:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:49.709 16:15:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.709 16:15:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.709 16:15:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:49.709 16:15:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.709 16:15:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:49.709 16:15:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:49.709 16:15:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:49.709 16:15:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:49.709 16:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.709 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.968 nvme0n1 00:19:49.968 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.968 16:15:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.968 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.968 16:15:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:49.968 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:49.968 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.233 16:15:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.233 16:15:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.233 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.233 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.233 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.233 16:15:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:50.233 16:15:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:50.233 16:15:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:50.233 16:15:51 -- host/auth.sh@44 -- # digest=sha384 00:19:50.233 16:15:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.233 16:15:51 -- host/auth.sh@44 -- # keyid=1 00:19:50.233 16:15:51 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:50.233 16:15:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:50.233 16:15:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:50.233 16:15:51 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:50.233 16:15:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:19:50.233 16:15:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:50.233 16:15:51 -- host/auth.sh@68 -- # digest=sha384 00:19:50.233 16:15:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:50.233 16:15:51 -- host/auth.sh@68 -- # keyid=1 00:19:50.233 16:15:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.233 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.233 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.233 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.233 16:15:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:50.233 16:15:51 -- nvmf/common.sh@717 -- # local ip 00:19:50.233 16:15:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:50.233 16:15:51 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:50.233 16:15:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.233 16:15:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.233 16:15:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:50.233 16:15:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.233 16:15:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:50.233 16:15:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:50.233 16:15:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:50.233 16:15:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:50.233 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.233 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.491 nvme0n1 00:19:50.491 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.491 16:15:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.491 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.491 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.491 16:15:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:50.491 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.491 16:15:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.491 16:15:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.491 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.492 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.492 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.492 16:15:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:50.492 16:15:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:50.492 16:15:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:50.492 16:15:51 -- host/auth.sh@44 -- # digest=sha384 00:19:50.492 16:15:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.492 16:15:51 -- host/auth.sh@44 -- # keyid=2 00:19:50.492 16:15:51 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:50.492 16:15:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:50.492 16:15:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:50.492 16:15:51 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:50.492 16:15:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:19:50.492 16:15:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:50.492 16:15:51 -- host/auth.sh@68 -- # digest=sha384 00:19:50.492 16:15:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:50.492 16:15:51 -- host/auth.sh@68 -- # keyid=2 00:19:50.492 16:15:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.492 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.492 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.492 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.492 16:15:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:50.492 16:15:51 -- nvmf/common.sh@717 -- # local ip 00:19:50.492 16:15:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:50.492 16:15:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:50.492 16:15:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.492 16:15:51 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.492 16:15:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:50.492 16:15:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.492 16:15:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:50.492 16:15:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:50.492 16:15:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:50.492 16:15:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:50.492 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.492 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.750 nvme0n1 00:19:50.750 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.750 16:15:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.750 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.750 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.750 16:15:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:50.750 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.750 16:15:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.750 16:15:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.750 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.750 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.750 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.750 16:15:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:50.750 16:15:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:50.750 16:15:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:50.750 16:15:51 -- host/auth.sh@44 -- # digest=sha384 00:19:50.750 16:15:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.750 16:15:51 -- host/auth.sh@44 -- # keyid=3 00:19:50.750 16:15:51 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:50.750 16:15:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:50.750 16:15:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:50.750 16:15:51 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:50.750 16:15:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:19:50.750 16:15:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:50.750 16:15:51 -- host/auth.sh@68 -- # digest=sha384 00:19:50.750 16:15:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:50.750 16:15:51 -- host/auth.sh@68 -- # keyid=3 00:19:50.750 16:15:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.750 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.750 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:50.750 16:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.750 16:15:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:50.750 16:15:51 -- nvmf/common.sh@717 -- # local ip 00:19:50.750 16:15:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:50.750 16:15:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:50.750 16:15:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.750 16:15:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.750 16:15:51 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:19:50.750 16:15:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.750 16:15:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:50.750 16:15:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:50.750 16:15:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:50.750 16:15:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:50.750 16:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.750 16:15:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.008 nvme0n1 00:19:51.008 16:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.008 16:15:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.008 16:15:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:51.008 16:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.008 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:51.008 16:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.008 16:15:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.008 16:15:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.008 16:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.008 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:51.008 16:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.008 16:15:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:51.008 16:15:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:51.008 16:15:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:51.008 16:15:52 -- host/auth.sh@44 -- # digest=sha384 00:19:51.008 16:15:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.008 16:15:52 -- host/auth.sh@44 -- # keyid=4 00:19:51.008 16:15:52 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:51.008 16:15:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:51.008 16:15:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:51.008 16:15:52 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:51.008 16:15:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:19:51.008 16:15:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:51.008 16:15:52 -- host/auth.sh@68 -- # digest=sha384 00:19:51.008 16:15:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:51.008 16:15:52 -- host/auth.sh@68 -- # keyid=4 00:19:51.008 16:15:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.008 16:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.008 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:51.008 16:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.008 16:15:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:51.008 16:15:52 -- nvmf/common.sh@717 -- # local ip 00:19:51.008 16:15:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:51.008 16:15:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:51.008 16:15:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.008 16:15:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.008 16:15:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:51.008 16:15:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:19:51.008 16:15:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:51.008 16:15:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:51.008 16:15:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:51.009 16:15:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:51.009 16:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.009 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:51.268 nvme0n1 00:19:51.268 16:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.268 16:15:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.268 16:15:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:51.268 16:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.268 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:51.268 16:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.526 16:15:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.526 16:15:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.526 16:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.526 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:51.526 16:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.526 16:15:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.526 16:15:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:51.526 16:15:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:51.526 16:15:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:51.526 16:15:52 -- host/auth.sh@44 -- # digest=sha384 00:19:51.526 16:15:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:51.526 16:15:52 -- host/auth.sh@44 -- # keyid=0 00:19:51.526 16:15:52 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:51.526 16:15:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:51.526 16:15:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:51.526 16:15:52 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:51.526 16:15:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:19:51.526 16:15:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:51.526 16:15:52 -- host/auth.sh@68 -- # digest=sha384 00:19:51.526 16:15:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:51.526 16:15:52 -- host/auth.sh@68 -- # keyid=0 00:19:51.526 16:15:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:51.526 16:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.526 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:51.526 16:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.526 16:15:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:51.526 16:15:52 -- nvmf/common.sh@717 -- # local ip 00:19:51.526 16:15:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:51.526 16:15:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:51.526 16:15:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.526 16:15:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.526 16:15:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:51.526 16:15:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.526 16:15:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:51.526 
16:15:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:51.526 16:15:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:51.526 16:15:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:51.526 16:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.526 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:51.784 nvme0n1 00:19:51.784 16:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.784 16:15:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.784 16:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.784 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:19:51.784 16:15:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:52.043 16:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.043 16:15:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.043 16:15:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.043 16:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.043 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.043 16:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.043 16:15:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:52.043 16:15:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:52.043 16:15:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:52.043 16:15:53 -- host/auth.sh@44 -- # digest=sha384 00:19:52.043 16:15:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.043 16:15:53 -- host/auth.sh@44 -- # keyid=1 00:19:52.043 16:15:53 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:52.043 16:15:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:52.043 16:15:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:52.043 16:15:53 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:52.043 16:15:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:19:52.043 16:15:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:52.043 16:15:53 -- host/auth.sh@68 -- # digest=sha384 00:19:52.043 16:15:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:52.043 16:15:53 -- host/auth.sh@68 -- # keyid=1 00:19:52.043 16:15:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.043 16:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.043 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.043 16:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.043 16:15:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:52.043 16:15:53 -- nvmf/common.sh@717 -- # local ip 00:19:52.043 16:15:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:52.043 16:15:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:52.043 16:15:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.043 16:15:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.043 16:15:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:52.043 16:15:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.043 16:15:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:52.043 16:15:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:52.043 16:15:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
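The get_main_ns_ip helper traced at nvmf/common.sh@717-731 picks the address the initiator dials: a transport-to-variable map, a pair of emptiness checks, then an indirect expansion. Reassembled from the trace (the variable name TEST_TRANSPORT for the already-expanded tcp is an assumption):

    # get_main_ns_ip, reassembled from the xtrace at nvmf/common.sh@717-731.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT ]] && return 1                 # expands to tcp in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # holds a variable *name*
        [[ -z ${!ip} ]] && return 1                          # indirect expansion: 10.0.0.1 here
        echo "${!ip}"
    }
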
00:19:52.043 16:15:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:52.043 16:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.043 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.609 nvme0n1 00:19:52.609 16:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.609 16:15:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.609 16:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.609 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.609 16:15:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:52.609 16:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.609 16:15:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.609 16:15:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.609 16:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.609 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.609 16:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.609 16:15:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:52.609 16:15:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:52.609 16:15:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:52.609 16:15:53 -- host/auth.sh@44 -- # digest=sha384 00:19:52.609 16:15:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.609 16:15:53 -- host/auth.sh@44 -- # keyid=2 00:19:52.609 16:15:53 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:52.609 16:15:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:52.609 16:15:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:52.609 16:15:53 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:52.609 16:15:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:19:52.609 16:15:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:52.609 16:15:53 -- host/auth.sh@68 -- # digest=sha384 00:19:52.609 16:15:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:52.609 16:15:53 -- host/auth.sh@68 -- # keyid=2 00:19:52.609 16:15:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.609 16:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.609 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.609 16:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.609 16:15:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:52.609 16:15:53 -- nvmf/common.sh@717 -- # local ip 00:19:52.610 16:15:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:52.610 16:15:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:52.610 16:15:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.610 16:15:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.610 16:15:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:52.610 16:15:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.610 16:15:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:52.610 16:15:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:52.610 16:15:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:52.610 16:15:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:52.610 16:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.610 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:19:53.174 nvme0n1 00:19:53.174 16:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.174 16:15:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.174 16:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.174 16:15:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:53.174 16:15:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.174 16:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.174 16:15:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.174 16:15:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.175 16:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.175 16:15:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.175 16:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.175 16:15:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:53.175 16:15:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:53.175 16:15:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:53.175 16:15:54 -- host/auth.sh@44 -- # digest=sha384 00:19:53.175 16:15:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.175 16:15:54 -- host/auth.sh@44 -- # keyid=3 00:19:53.175 16:15:54 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:53.175 16:15:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:53.175 16:15:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:53.175 16:15:54 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:53.175 16:15:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:19:53.175 16:15:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:53.175 16:15:54 -- host/auth.sh@68 -- # digest=sha384 00:19:53.175 16:15:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:53.175 16:15:54 -- host/auth.sh@68 -- # keyid=3 00:19:53.175 16:15:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.175 16:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.175 16:15:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.175 16:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.175 16:15:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:53.175 16:15:54 -- nvmf/common.sh@717 -- # local ip 00:19:53.175 16:15:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:53.175 16:15:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:53.175 16:15:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.175 16:15:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.175 16:15:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:53.175 16:15:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.175 16:15:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:53.175 16:15:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:53.175 16:15:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:53.175 16:15:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:53.175 16:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 
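All five secrets this suite cycles through share the DHHC-1:<hh>:<base64>: layout visible above. In the NVMe DH-HMAC-CHAP secret representation (the same format nvme-cli's gen-dhchap-key emits), the two-digit field encodes how the secret was transformed: 00 means it is used as-is, 01/02/03 mean SHA-256/SHA-384/SHA-512, which matches key0 through key4 carrying 00, 00, 01, 02, and 03. Splitting one of the logged secrets into its fields:

    # Field split of a DHHC-1 secret; the key is copied verbatim from the trace.
    # Hash-id meaning (00 = untransformed, 01/02/03 = SHA-256/384/512) follows the
    # NVMe DH-HMAC-CHAP secret format and is background knowledge, not from this log.
    secret='DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e:'
    IFS=: read -r version hash_id payload _ <<< "$secret"
    printf 'version=%s hash_id=%s payload=%s\n' "$version" "$hash_id" "$payload"
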
00:19:53.175 16:15:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.740 nvme0n1 00:19:53.740 16:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.740 16:15:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.740 16:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.740 16:15:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.740 16:15:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:53.740 16:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.740 16:15:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.740 16:15:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.740 16:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.740 16:15:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.740 16:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.740 16:15:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:53.740 16:15:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:53.740 16:15:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:53.740 16:15:54 -- host/auth.sh@44 -- # digest=sha384 00:19:53.740 16:15:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.740 16:15:54 -- host/auth.sh@44 -- # keyid=4 00:19:53.740 16:15:54 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:53.740 16:15:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:53.740 16:15:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:53.740 16:15:54 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:53.740 16:15:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:19:53.740 16:15:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:53.740 16:15:54 -- host/auth.sh@68 -- # digest=sha384 00:19:53.740 16:15:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:53.740 16:15:54 -- host/auth.sh@68 -- # keyid=4 00:19:53.740 16:15:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.740 16:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.740 16:15:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.740 16:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.740 16:15:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:53.740 16:15:54 -- nvmf/common.sh@717 -- # local ip 00:19:53.740 16:15:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:53.740 16:15:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:53.740 16:15:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.740 16:15:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.740 16:15:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:53.740 16:15:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.740 16:15:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:53.740 16:15:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:53.740 16:15:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:53.740 16:15:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.740 16:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.740 16:15:54 -- common/autotest_common.sh@10 -- # set +x 00:19:54.306 
nvme0n1 00:19:54.306 16:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.306 16:15:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.306 16:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.306 16:15:55 -- common/autotest_common.sh@10 -- # set +x 00:19:54.306 16:15:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:54.306 16:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.306 16:15:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.306 16:15:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.306 16:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.306 16:15:55 -- common/autotest_common.sh@10 -- # set +x 00:19:54.306 16:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.306 16:15:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.306 16:15:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:54.306 16:15:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:54.306 16:15:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:54.306 16:15:55 -- host/auth.sh@44 -- # digest=sha384 00:19:54.306 16:15:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:54.306 16:15:55 -- host/auth.sh@44 -- # keyid=0 00:19:54.306 16:15:55 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:54.306 16:15:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:54.306 16:15:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:54.306 16:15:55 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:54.306 16:15:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:19:54.306 16:15:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:54.306 16:15:55 -- host/auth.sh@68 -- # digest=sha384 00:19:54.306 16:15:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:54.306 16:15:55 -- host/auth.sh@68 -- # keyid=0 00:19:54.306 16:15:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:54.306 16:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.306 16:15:55 -- common/autotest_common.sh@10 -- # set +x 00:19:54.306 16:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.306 16:15:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:54.306 16:15:55 -- nvmf/common.sh@717 -- # local ip 00:19:54.306 16:15:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:54.306 16:15:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:54.306 16:15:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.306 16:15:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.306 16:15:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:54.306 16:15:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.306 16:15:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:54.306 16:15:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:54.306 16:15:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:54.306 16:15:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:54.306 16:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.306 16:15:55 -- common/autotest_common.sh@10 -- # set +x 00:19:55.239 nvme0n1 00:19:55.239 16:15:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
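Driving all of the above is the triple loop at host/auth.sh@107-111: every digest, against every DH group, against every key id, with the target-side key installed immediately before each connect attempt. Reassembled, with array contents limited to what this excerpt actually exercises:

    # The driving loop, reassembled from host/auth.sh@107-111 in the trace.
    # Array contents below are only the values visible in this excerpt; the full
    # script presumably covers more digests and groups elsewhere in the log.
    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    keys=(
        'DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk:'
        'DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==:'
        'DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e:'
        'DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==:'
        'DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=:'
    )

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
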
00:19:55.239 16:15:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.239 16:15:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:55.239 16:15:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.239 16:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.239 16:15:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.239 16:15:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.239 16:15:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.239 16:15:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.239 16:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.239 16:15:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.239 16:15:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:55.239 16:15:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:55.239 16:15:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:55.239 16:15:56 -- host/auth.sh@44 -- # digest=sha384 00:19:55.239 16:15:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.239 16:15:56 -- host/auth.sh@44 -- # keyid=1 00:19:55.239 16:15:56 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:55.239 16:15:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:55.239 16:15:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:55.239 16:15:56 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:55.239 16:15:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:19:55.239 16:15:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:55.239 16:15:56 -- host/auth.sh@68 -- # digest=sha384 00:19:55.239 16:15:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:55.239 16:15:56 -- host/auth.sh@68 -- # keyid=1 00:19:55.239 16:15:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:55.239 16:15:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.239 16:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.239 16:15:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.239 16:15:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:55.239 16:15:56 -- nvmf/common.sh@717 -- # local ip 00:19:55.239 16:15:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:55.239 16:15:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:55.239 16:15:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.239 16:15:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.239 16:15:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:55.239 16:15:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.239 16:15:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:55.239 16:15:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:55.239 16:15:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:55.239 16:15:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:55.239 16:15:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.239 16:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:56.173 nvme0n1 00:19:56.173 16:15:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.173 16:15:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.173 16:15:57 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.173 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:19:56.173 16:15:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:56.173 16:15:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.173 16:15:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.173 16:15:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.173 16:15:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.173 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:19:56.173 16:15:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.173 16:15:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:56.173 16:15:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:56.173 16:15:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.173 16:15:57 -- host/auth.sh@44 -- # digest=sha384 00:19:56.173 16:15:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:56.173 16:15:57 -- host/auth.sh@44 -- # keyid=2 00:19:56.173 16:15:57 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:56.173 16:15:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:56.173 16:15:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:56.173 16:15:57 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:56.173 16:15:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:19:56.173 16:15:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.173 16:15:57 -- host/auth.sh@68 -- # digest=sha384 00:19:56.173 16:15:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:56.173 16:15:57 -- host/auth.sh@68 -- # keyid=2 00:19:56.173 16:15:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:56.173 16:15:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.173 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:19:56.173 16:15:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.173 16:15:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:56.173 16:15:57 -- nvmf/common.sh@717 -- # local ip 00:19:56.173 16:15:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.173 16:15:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.173 16:15:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.173 16:15:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.173 16:15:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:56.173 16:15:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.173 16:15:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:56.173 16:15:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:56.173 16:15:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:56.173 16:15:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:56.173 16:15:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.173 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:19:57.106 nvme0n1 00:19:57.106 16:15:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.106 16:15:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.106 16:15:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.106 16:15:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.106 16:15:58 -- common/autotest_common.sh@10 
-- # set +x 00:19:57.106 16:15:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.106 16:15:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.106 16:15:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.106 16:15:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.106 16:15:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.106 16:15:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.106 16:15:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.106 16:15:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:57.106 16:15:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.106 16:15:58 -- host/auth.sh@44 -- # digest=sha384 00:19:57.106 16:15:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:57.106 16:15:58 -- host/auth.sh@44 -- # keyid=3 00:19:57.106 16:15:58 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:57.106 16:15:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:57.106 16:15:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:57.106 16:15:58 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:57.106 16:15:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:19:57.106 16:15:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.106 16:15:58 -- host/auth.sh@68 -- # digest=sha384 00:19:57.106 16:15:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:57.106 16:15:58 -- host/auth.sh@68 -- # keyid=3 00:19:57.106 16:15:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:57.106 16:15:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.106 16:15:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.106 16:15:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.106 16:15:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.106 16:15:58 -- nvmf/common.sh@717 -- # local ip 00:19:57.106 16:15:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.106 16:15:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:57.106 16:15:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.106 16:15:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.106 16:15:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:57.106 16:15:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.106 16:15:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:57.106 16:15:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:57.106 16:15:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:57.106 16:15:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:57.106 16:15:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.107 16:15:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.039 nvme0n1 00:19:58.039 16:15:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.039 16:15:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.039 16:15:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:58.039 16:15:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.039 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:58.039 16:15:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.039 16:15:59 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.039 16:15:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.039 16:15:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.039 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:58.297 16:15:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.297 16:15:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:58.297 16:15:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:58.297 16:15:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:58.297 16:15:59 -- host/auth.sh@44 -- # digest=sha384 00:19:58.297 16:15:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:58.297 16:15:59 -- host/auth.sh@44 -- # keyid=4 00:19:58.297 16:15:59 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:58.297 16:15:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:58.297 16:15:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:58.297 16:15:59 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:58.297 16:15:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:19:58.297 16:15:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:58.297 16:15:59 -- host/auth.sh@68 -- # digest=sha384 00:19:58.297 16:15:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:58.297 16:15:59 -- host/auth.sh@68 -- # keyid=4 00:19:58.297 16:15:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.297 16:15:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.297 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:58.297 16:15:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.297 16:15:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:58.297 16:15:59 -- nvmf/common.sh@717 -- # local ip 00:19:58.297 16:15:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:58.297 16:15:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:58.297 16:15:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.297 16:15:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.297 16:15:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:58.297 16:15:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.297 16:15:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:58.297 16:15:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:58.297 16:15:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:58.297 16:15:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.297 16:15:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.297 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:59.267 nvme0n1 00:19:59.267 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.267 16:16:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.267 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.267 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.267 16:16:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.267 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.267 16:16:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.267 16:16:00 -- 
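The host-side half, connect_authenticate, is what the bdev_nvme_* RPCs above perform: first narrow the initiator to the digest/dhgroup pair under test, then attach using the matching key slot. Reconstructed from the trace as standalone rpc.py calls (the script path is an assumption; inside the harness the rpc_cmd wrapper resolves it):

# Host-side sketch of "connect_authenticate sha384 ffdhe8192 4".
rpc=./scripts/rpc.py   # path assumed
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4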
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.267 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.267 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.267 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.267 16:16:00 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:59.267 16:16:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.267 16:16:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.267 16:16:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:59.267 16:16:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.267 16:16:00 -- host/auth.sh@44 -- # digest=sha512 00:19:59.267 16:16:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.267 16:16:00 -- host/auth.sh@44 -- # keyid=0 00:19:59.267 16:16:00 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:59.267 16:16:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:59.267 16:16:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:59.267 16:16:00 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:19:59.267 16:16:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:19:59.267 16:16:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.267 16:16:00 -- host/auth.sh@68 -- # digest=sha512 00:19:59.267 16:16:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:59.267 16:16:00 -- host/auth.sh@68 -- # keyid=0 00:19:59.267 16:16:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.267 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.267 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.267 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.267 16:16:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.267 16:16:00 -- nvmf/common.sh@717 -- # local ip 00:19:59.267 16:16:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.267 16:16:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.267 16:16:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.267 16:16:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.267 16:16:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:59.267 16:16:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.267 16:16:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:59.267 16:16:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:59.267 16:16:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:59.267 16:16:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:59.267 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.267 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.267 nvme0n1 00:19:59.267 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.267 16:16:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.267 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.267 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.267 16:16:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.267 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.267 16:16:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.267 16:16:00 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.267 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.267 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.267 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.267 16:16:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.267 16:16:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:59.267 16:16:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.267 16:16:00 -- host/auth.sh@44 -- # digest=sha512 00:19:59.267 16:16:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.267 16:16:00 -- host/auth.sh@44 -- # keyid=1 00:19:59.267 16:16:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:59.267 16:16:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:59.267 16:16:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:59.267 16:16:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:19:59.267 16:16:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:19:59.267 16:16:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.267 16:16:00 -- host/auth.sh@68 -- # digest=sha512 00:19:59.267 16:16:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:59.267 16:16:00 -- host/auth.sh@68 -- # keyid=1 00:19:59.267 16:16:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.267 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.267 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.267 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.267 16:16:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.267 16:16:00 -- nvmf/common.sh@717 -- # local ip 00:19:59.267 16:16:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.267 16:16:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.267 16:16:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.267 16:16:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.267 16:16:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:59.267 16:16:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.267 16:16:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:59.267 16:16:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:59.267 16:16:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:59.267 16:16:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:59.267 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.267 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.525 nvme0n1 00:19:59.525 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.525 16:16:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.525 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.525 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.525 16:16:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.525 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.525 16:16:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.525 16:16:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.525 16:16:00 -- common/autotest_common.sh@549 -- 
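The block of nvmf/common.sh@717-731 lines that precedes every attach is get_main_ns_ip deciding which address to dial: an associative array maps the transport to the name of an environment variable, and indirect expansion then yields the address itself. A condensed reconstruction, with TEST_TRANSPORT standing in for the literal tcp visible in the trace:

# Condensed reconstruction of get_main_ns_ip (nvmf/common.sh@717-731).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}     # tcp -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1              # indirect expansion: variable set?
    echo "${!ip}"                            # -> 10.0.0.1 in this run
}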
# xtrace_disable 00:19:59.525 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.525 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.525 16:16:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.525 16:16:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:59.525 16:16:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.526 16:16:00 -- host/auth.sh@44 -- # digest=sha512 00:19:59.526 16:16:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.526 16:16:00 -- host/auth.sh@44 -- # keyid=2 00:19:59.526 16:16:00 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:59.526 16:16:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:59.526 16:16:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:59.526 16:16:00 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:19:59.526 16:16:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:19:59.526 16:16:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.526 16:16:00 -- host/auth.sh@68 -- # digest=sha512 00:19:59.526 16:16:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:59.526 16:16:00 -- host/auth.sh@68 -- # keyid=2 00:19:59.526 16:16:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.526 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.526 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.526 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.526 16:16:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.526 16:16:00 -- nvmf/common.sh@717 -- # local ip 00:19:59.526 16:16:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.526 16:16:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.526 16:16:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.526 16:16:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.526 16:16:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:59.526 16:16:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.526 16:16:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:59.526 16:16:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:59.526 16:16:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:59.526 16:16:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:59.526 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.526 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.526 nvme0n1 00:19:59.526 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.526 16:16:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.526 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.526 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.526 16:16:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.526 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.784 16:16:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.784 16:16:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.784 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.784 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.784 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.784 
16:16:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.784 16:16:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:59.784 16:16:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.784 16:16:00 -- host/auth.sh@44 -- # digest=sha512 00:19:59.785 16:16:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.785 16:16:00 -- host/auth.sh@44 -- # keyid=3 00:19:59.785 16:16:00 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:59.785 16:16:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:59.785 16:16:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:59.785 16:16:00 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:19:59.785 16:16:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:19:59.785 16:16:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.785 16:16:00 -- host/auth.sh@68 -- # digest=sha512 00:19:59.785 16:16:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:59.785 16:16:00 -- host/auth.sh@68 -- # keyid=3 00:19:59.785 16:16:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.785 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.785 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.785 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.785 16:16:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.785 16:16:00 -- nvmf/common.sh@717 -- # local ip 00:19:59.785 16:16:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.785 16:16:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.785 16:16:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.785 16:16:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.785 16:16:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:59.785 16:16:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.785 16:16:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:59.785 16:16:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:59.785 16:16:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:59.785 16:16:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:59.785 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.785 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.785 nvme0n1 00:19:59.785 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.785 16:16:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.785 16:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.785 16:16:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.785 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.785 16:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.785 16:16:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.785 16:16:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.785 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.785 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:19:59.785 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.785 16:16:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.785 16:16:01 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:19:59.785 16:16:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.785 16:16:01 -- host/auth.sh@44 -- # digest=sha512 00:19:59.785 16:16:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.785 16:16:01 -- host/auth.sh@44 -- # keyid=4 00:19:59.785 16:16:01 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:59.785 16:16:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:59.785 16:16:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:59.785 16:16:01 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:19:59.785 16:16:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:19:59.785 16:16:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.785 16:16:01 -- host/auth.sh@68 -- # digest=sha512 00:19:59.785 16:16:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:59.785 16:16:01 -- host/auth.sh@68 -- # keyid=4 00:19:59.785 16:16:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.785 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.785 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:19:59.785 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.785 16:16:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.785 16:16:01 -- nvmf/common.sh@717 -- # local ip 00:19:59.785 16:16:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.785 16:16:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.785 16:16:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.785 16:16:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.785 16:16:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:59.785 16:16:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.785 16:16:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:59.785 16:16:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:59.785 16:16:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:59.785 16:16:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:59.785 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.785 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.043 nvme0n1 00:20:00.043 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.043 16:16:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.043 16:16:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.043 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.043 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.043 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.043 16:16:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.043 16:16:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.043 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.043 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.043 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.043 16:16:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.043 16:16:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.043 16:16:01 -- host/auth.sh@110 -- # 
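The pass/fail signal for each combination is deliberately simple: after the attach, a controller named nvme0 must exist. The \n\v\m\e\0 on the right of == is every character backslash-escaped so bash compares a literal string instead of a glob pattern, and the controller is detached again so the next keyid starts clean. Reconstructed (rpc path assumed, as above):

# Post-connect check and cleanup, as at host/auth.sh@73-74.
rpc=./scripts/rpc.py   # path assumed
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == \n\v\m\e\0 ]]                  # literal match against "nvme0"
$rpc bdev_nvme_detach_controller nvme0     # tear down before the next keyid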
nvmet_auth_set_key sha512 ffdhe3072 0 00:20:00.043 16:16:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.043 16:16:01 -- host/auth.sh@44 -- # digest=sha512 00:20:00.043 16:16:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.043 16:16:01 -- host/auth.sh@44 -- # keyid=0 00:20:00.043 16:16:01 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:20:00.043 16:16:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:00.043 16:16:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:00.043 16:16:01 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:20:00.043 16:16:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:20:00.043 16:16:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.043 16:16:01 -- host/auth.sh@68 -- # digest=sha512 00:20:00.043 16:16:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:00.043 16:16:01 -- host/auth.sh@68 -- # keyid=0 00:20:00.043 16:16:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.043 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.043 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.043 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.043 16:16:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.043 16:16:01 -- nvmf/common.sh@717 -- # local ip 00:20:00.043 16:16:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.043 16:16:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.043 16:16:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.043 16:16:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.043 16:16:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.043 16:16:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.043 16:16:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.043 16:16:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.043 16:16:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.043 16:16:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:00.043 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.043 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.301 nvme0n1 00:20:00.301 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.301 16:16:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.301 16:16:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.301 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.301 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.301 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.301 16:16:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.301 16:16:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.301 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.301 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.301 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.301 16:16:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.301 16:16:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:00.301 16:16:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.301 16:16:01 -- host/auth.sh@44 -- # 
digest=sha512 00:20:00.301 16:16:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.301 16:16:01 -- host/auth.sh@44 -- # keyid=1 00:20:00.301 16:16:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:00.301 16:16:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:00.301 16:16:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:00.301 16:16:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:00.301 16:16:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:20:00.301 16:16:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.301 16:16:01 -- host/auth.sh@68 -- # digest=sha512 00:20:00.301 16:16:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:00.301 16:16:01 -- host/auth.sh@68 -- # keyid=1 00:20:00.301 16:16:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.301 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.301 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.301 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.301 16:16:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.301 16:16:01 -- nvmf/common.sh@717 -- # local ip 00:20:00.301 16:16:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.301 16:16:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.301 16:16:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.301 16:16:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.301 16:16:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.302 16:16:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.302 16:16:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.302 16:16:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.302 16:16:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.302 16:16:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:00.302 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.302 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.302 nvme0n1 00:20:00.302 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.302 16:16:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.302 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.302 16:16:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.302 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.559 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.559 16:16:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.559 16:16:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.559 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.559 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.559 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.559 16:16:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.559 16:16:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:00.559 16:16:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.559 16:16:01 -- host/auth.sh@44 -- # digest=sha512 00:20:00.559 16:16:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.559 16:16:01 -- host/auth.sh@44 
-- # keyid=2 00:20:00.559 16:16:01 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:20:00.559 16:16:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:00.559 16:16:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:00.559 16:16:01 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:20:00.559 16:16:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:20:00.559 16:16:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.559 16:16:01 -- host/auth.sh@68 -- # digest=sha512 00:20:00.559 16:16:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:00.559 16:16:01 -- host/auth.sh@68 -- # keyid=2 00:20:00.559 16:16:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.559 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.559 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.559 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.559 16:16:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.559 16:16:01 -- nvmf/common.sh@717 -- # local ip 00:20:00.559 16:16:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.560 16:16:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.560 16:16:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.560 16:16:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.560 16:16:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.560 16:16:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.560 16:16:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.560 16:16:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.560 16:16:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.560 16:16:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:00.560 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.560 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.560 nvme0n1 00:20:00.560 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.560 16:16:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.560 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.560 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.560 16:16:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.560 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.560 16:16:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.560 16:16:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.560 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.560 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.818 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.818 16:16:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.818 16:16:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:00.818 16:16:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.818 16:16:01 -- host/auth.sh@44 -- # digest=sha512 00:20:00.818 16:16:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.818 16:16:01 -- host/auth.sh@44 -- # keyid=3 00:20:00.818 16:16:01 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:20:00.818 16:16:01 
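A note on the secrets themselves: they follow the NVMe in-band authentication key format DHHC-1:NN:<base64 payload>:, where NN records the HMAC transformation applied to the raw secret; by the spec's convention 00 means no transformation and 01/02/03 mean SHA-256/SHA-384/SHA-512, which is why the five key slots in this run carry prefixes 00, 00, 01, 02, and 03. Outside this harness, nvme-cli can mint compatible secrets; a sketch, assuming a recent nvme-cli whose gen-dhchap-key takes these flags:

# Sketch: generate a DHHC-1 secret like the ones in this log (nvme-cli flags assumed).
nvme gen-dhchap-key --hmac=2 --nqn nqn.2024-02.io.spdk:host0
# expected output shape: DHHC-1:02:<base64...>: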
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:00.818 16:16:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:00.818 16:16:01 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:20:00.818 16:16:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:20:00.818 16:16:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.818 16:16:01 -- host/auth.sh@68 -- # digest=sha512 00:20:00.818 16:16:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:00.818 16:16:01 -- host/auth.sh@68 -- # keyid=3 00:20:00.818 16:16:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.818 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.818 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.818 16:16:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.818 16:16:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.818 16:16:01 -- nvmf/common.sh@717 -- # local ip 00:20:00.818 16:16:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.818 16:16:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.818 16:16:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.818 16:16:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.818 16:16:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.818 16:16:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.818 16:16:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.818 16:16:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.818 16:16:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.818 16:16:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:00.818 16:16:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.818 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.818 nvme0n1 00:20:00.818 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.818 16:16:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.818 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.818 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:00.818 16:16:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.818 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.818 16:16:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.818 16:16:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.818 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.818 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:00.818 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.818 16:16:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.818 16:16:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:00.818 16:16:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.818 16:16:02 -- host/auth.sh@44 -- # digest=sha512 00:20:00.818 16:16:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.818 16:16:02 -- host/auth.sh@44 -- # keyid=4 00:20:00.818 16:16:02 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:20:00.818 16:16:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:00.818 16:16:02 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:20:00.818 16:16:02 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:20:00.818 16:16:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:20:00.818 16:16:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.818 16:16:02 -- host/auth.sh@68 -- # digest=sha512 00:20:00.818 16:16:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:00.818 16:16:02 -- host/auth.sh@68 -- # keyid=4 00:20:00.818 16:16:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.818 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.818 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:00.818 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.818 16:16:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.818 16:16:02 -- nvmf/common.sh@717 -- # local ip 00:20:00.818 16:16:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.818 16:16:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.818 16:16:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.818 16:16:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.818 16:16:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.818 16:16:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.818 16:16:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.818 16:16:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.818 16:16:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.818 16:16:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.818 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.818 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.076 nvme0n1 00:20:01.076 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.076 16:16:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.076 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.076 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.076 16:16:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:01.076 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.076 16:16:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.076 16:16:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.076 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.076 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.076 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.076 16:16:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.076 16:16:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:01.076 16:16:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:01.076 16:16:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:01.076 16:16:02 -- host/auth.sh@44 -- # digest=sha512 00:20:01.076 16:16:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.076 16:16:02 -- host/auth.sh@44 -- # keyid=0 00:20:01.076 16:16:02 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:20:01.076 16:16:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:01.076 16:16:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:01.076 16:16:02 -- 
host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:20:01.076 16:16:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:20:01.076 16:16:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:01.076 16:16:02 -- host/auth.sh@68 -- # digest=sha512 00:20:01.076 16:16:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:01.076 16:16:02 -- host/auth.sh@68 -- # keyid=0 00:20:01.076 16:16:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:01.076 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.076 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.076 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.076 16:16:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:01.076 16:16:02 -- nvmf/common.sh@717 -- # local ip 00:20:01.076 16:16:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:01.076 16:16:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:01.076 16:16:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.076 16:16:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.076 16:16:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:01.076 16:16:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.076 16:16:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:01.076 16:16:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:01.076 16:16:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:01.076 16:16:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:01.076 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.076 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.335 nvme0n1 00:20:01.335 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.335 16:16:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.335 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.335 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.335 16:16:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:01.335 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.335 16:16:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.335 16:16:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.335 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.335 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.593 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.593 16:16:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:01.593 16:16:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:01.593 16:16:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:01.593 16:16:02 -- host/auth.sh@44 -- # digest=sha512 00:20:01.593 16:16:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.593 16:16:02 -- host/auth.sh@44 -- # keyid=1 00:20:01.593 16:16:02 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:01.593 16:16:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:01.593 16:16:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:01.593 16:16:02 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:01.593 16:16:02 -- 
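Stepping back, the host/auth.sh@107-110 loop markers scattered through the trace reveal the overall sweep: every digest is crossed with every DH group and every key slot, one set-key/connect round per combination, which is why the same dozen lines repeat with only three parameters changing. The driving structure, paraphrased from those markers (list contents abridged to what this log exercises):

# Sweep structure at host/auth.sh@107-110 (paraphrased).
for digest in "${digests[@]}"; do           # ... sha384 sha512 visible here
    for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048 ... ffdhe8192
        for keyid in "${!keys[@]}"; do      # 0 1 2 3 4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done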
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:20:01.593 16:16:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:01.593 16:16:02 -- host/auth.sh@68 -- # digest=sha512 00:20:01.593 16:16:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:01.593 16:16:02 -- host/auth.sh@68 -- # keyid=1 00:20:01.593 16:16:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:01.593 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.593 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.593 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.593 16:16:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:01.593 16:16:02 -- nvmf/common.sh@717 -- # local ip 00:20:01.593 16:16:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:01.593 16:16:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:01.593 16:16:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.593 16:16:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.593 16:16:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:01.593 16:16:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.593 16:16:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:01.593 16:16:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:01.594 16:16:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:01.594 16:16:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:01.594 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.594 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.851 nvme0n1 00:20:01.851 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.851 16:16:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.851 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.851 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.851 16:16:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:01.851 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.851 16:16:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.851 16:16:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.851 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.851 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.851 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.851 16:16:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:01.851 16:16:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:01.851 16:16:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:01.851 16:16:02 -- host/auth.sh@44 -- # digest=sha512 00:20:01.851 16:16:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.851 16:16:02 -- host/auth.sh@44 -- # keyid=2 00:20:01.851 16:16:02 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:20:01.851 16:16:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:01.851 16:16:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:01.851 16:16:02 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:20:01.851 16:16:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:20:01.851 16:16:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:01.851 16:16:02 -- 
host/auth.sh@68 -- # digest=sha512 00:20:01.851 16:16:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:01.851 16:16:02 -- host/auth.sh@68 -- # keyid=2 00:20:01.851 16:16:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:01.851 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.851 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.851 16:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.851 16:16:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:01.851 16:16:02 -- nvmf/common.sh@717 -- # local ip 00:20:01.851 16:16:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:01.851 16:16:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:01.851 16:16:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.851 16:16:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.851 16:16:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:01.851 16:16:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.851 16:16:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:01.851 16:16:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:01.851 16:16:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:01.851 16:16:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:01.851 16:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.851 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:02.109 nvme0n1 00:20:02.109 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.109 16:16:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.109 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.109 16:16:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:02.109 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.109 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.109 16:16:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.109 16:16:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.109 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.109 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.109 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.109 16:16:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:02.109 16:16:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:02.109 16:16:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:02.109 16:16:03 -- host/auth.sh@44 -- # digest=sha512 00:20:02.109 16:16:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:02.109 16:16:03 -- host/auth.sh@44 -- # keyid=3 00:20:02.109 16:16:03 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:20:02.109 16:16:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:02.109 16:16:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:02.109 16:16:03 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:20:02.109 16:16:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:20:02.109 16:16:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:02.109 16:16:03 -- host/auth.sh@68 -- # digest=sha512 00:20:02.109 16:16:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:02.109 16:16:03 
-- host/auth.sh@68 -- # keyid=3 00:20:02.109 16:16:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:02.109 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.109 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.109 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.109 16:16:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:02.109 16:16:03 -- nvmf/common.sh@717 -- # local ip 00:20:02.109 16:16:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:02.109 16:16:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:02.109 16:16:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.109 16:16:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.109 16:16:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:02.109 16:16:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.109 16:16:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:02.109 16:16:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:02.109 16:16:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:02.109 16:16:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:02.109 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.109 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.367 nvme0n1 00:20:02.367 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.367 16:16:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.367 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.367 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.367 16:16:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:02.367 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.367 16:16:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.367 16:16:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.367 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.367 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.367 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.367 16:16:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:02.367 16:16:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:02.367 16:16:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:02.367 16:16:03 -- host/auth.sh@44 -- # digest=sha512 00:20:02.367 16:16:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:02.367 16:16:03 -- host/auth.sh@44 -- # keyid=4 00:20:02.367 16:16:03 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:20:02.367 16:16:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:02.367 16:16:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:02.367 16:16:03 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:20:02.367 16:16:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:20:02.367 16:16:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:02.367 16:16:03 -- host/auth.sh@68 -- # digest=sha512 00:20:02.367 16:16:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:02.367 16:16:03 -- host/auth.sh@68 -- # keyid=4 00:20:02.367 16:16:03 -- host/auth.sh@69 -- # 
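One last recurring idiom worth decoding: every RPC is bracketed by common/autotest_common.sh@549 (xtrace_disable) and @577 ([[ 0 == 0 ]]). That is the rpc_cmd wrapper hiding its own plumbing from the trace and then asserting the RPC's exit status; by the time set -x prints the test, $? has already expanded to 0. A simplified stand-in (the real wrapper keeps a persistent rpc.py session, which this sketch omits):

# Simplified stand-in for autotest_common.sh's rpc_cmd.
rpc_cmd() {
    xtrace_disable                     # keep wrapper internals out of the log
    "$rootdir/scripts/rpc.py" "$@"
    local rc=$?
    xtrace_restore
    [[ $rc == 0 ]]                     # surfaces in the trace as "[[ 0 == 0 ]]"
}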
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:02.367 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.367 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.367 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.367 16:16:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:02.367 16:16:03 -- nvmf/common.sh@717 -- # local ip 00:20:02.367 16:16:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:02.367 16:16:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:02.367 16:16:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.367 16:16:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.367 16:16:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:02.367 16:16:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.367 16:16:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:02.367 16:16:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:02.367 16:16:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:02.367 16:16:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.367 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.367 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.625 nvme0n1 00:20:02.625 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.625 16:16:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.625 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.625 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.625 16:16:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:02.625 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.625 16:16:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.625 16:16:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.625 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.625 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.882 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.882 16:16:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.882 16:16:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:02.882 16:16:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:02.882 16:16:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:02.882 16:16:03 -- host/auth.sh@44 -- # digest=sha512 00:20:02.882 16:16:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.882 16:16:03 -- host/auth.sh@44 -- # keyid=0 00:20:02.882 16:16:03 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:20:02.882 16:16:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:02.882 16:16:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:02.882 16:16:03 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:20:02.882 16:16:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:20:02.882 16:16:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:02.882 16:16:03 -- host/auth.sh@68 -- # digest=sha512 00:20:02.882 16:16:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:02.882 16:16:03 -- host/auth.sh@68 -- # keyid=0 00:20:02.882 16:16:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
00:20:02.882 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.882 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.882 16:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.882 16:16:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:02.882 16:16:03 -- nvmf/common.sh@717 -- # local ip 00:20:02.882 16:16:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:02.882 16:16:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:02.882 16:16:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.882 16:16:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.882 16:16:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:02.882 16:16:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.882 16:16:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:02.882 16:16:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:02.882 16:16:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:02.882 16:16:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:02.882 16:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.882 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:03.448 nvme0n1 00:20:03.448 16:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.448 16:16:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.448 16:16:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:03.448 16:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.448 16:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:03.448 16:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.448 16:16:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.448 16:16:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.448 16:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.448 16:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:03.448 16:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.448 16:16:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:03.448 16:16:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:03.448 16:16:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:03.448 16:16:04 -- host/auth.sh@44 -- # digest=sha512 00:20:03.448 16:16:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:03.448 16:16:04 -- host/auth.sh@44 -- # keyid=1 00:20:03.448 16:16:04 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:03.448 16:16:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:03.448 16:16:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:03.448 16:16:04 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:03.448 16:16:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:20:03.448 16:16:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:03.448 16:16:04 -- host/auth.sh@68 -- # digest=sha512 00:20:03.448 16:16:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:03.448 16:16:04 -- host/auth.sh@68 -- # keyid=1 00:20:03.448 16:16:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:03.448 16:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.448 16:16:04 -- 
common/autotest_common.sh@10 -- # set +x 00:20:03.448 16:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.448 16:16:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:03.448 16:16:04 -- nvmf/common.sh@717 -- # local ip 00:20:03.448 16:16:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:03.448 16:16:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:03.448 16:16:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.448 16:16:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.448 16:16:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:03.448 16:16:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.448 16:16:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:03.448 16:16:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:03.448 16:16:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:03.448 16:16:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:03.448 16:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.448 16:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:03.705 nvme0n1 00:20:03.705 16:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.705 16:16:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.705 16:16:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:03.705 16:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.705 16:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:03.705 16:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.965 16:16:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.965 16:16:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.965 16:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.965 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:03.965 16:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.965 16:16:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:03.965 16:16:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:03.965 16:16:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:03.965 16:16:05 -- host/auth.sh@44 -- # digest=sha512 00:20:03.965 16:16:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:03.965 16:16:05 -- host/auth.sh@44 -- # keyid=2 00:20:03.965 16:16:05 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:20:03.965 16:16:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:03.965 16:16:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:03.965 16:16:05 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:20:03.965 16:16:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:20:03.965 16:16:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:03.965 16:16:05 -- host/auth.sh@68 -- # digest=sha512 00:20:03.965 16:16:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:03.965 16:16:05 -- host/auth.sh@68 -- # keyid=2 00:20:03.965 16:16:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:03.965 16:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.965 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:03.965 16:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.965 16:16:05 -- host/auth.sh@70 -- # 
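Each connect_authenticate pass (host/auth.sh@66-74) reduces to the same four initiator-side RPCs: restrict the host to one digest/DH-group pair, attach with the key under test, confirm a controller named nvme0 appeared, and detach again. Outside the rpc_cmd wrapper the identical sequence can be driven with SPDK's rpc.py; every flag below is copied from the trace, only the script path varies per checkout:

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0
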
get_main_ns_ip 00:20:03.965 16:16:05 -- nvmf/common.sh@717 -- # local ip 00:20:03.965 16:16:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:03.965 16:16:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:03.965 16:16:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.965 16:16:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.965 16:16:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:03.965 16:16:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.965 16:16:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:03.965 16:16:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:03.965 16:16:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:03.965 16:16:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:03.965 16:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.965 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:04.564 nvme0n1 00:20:04.564 16:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.564 16:16:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.564 16:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.564 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:04.564 16:16:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:04.564 16:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.564 16:16:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.564 16:16:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.564 16:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.564 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:04.564 16:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.564 16:16:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:04.564 16:16:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:04.564 16:16:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:04.564 16:16:05 -- host/auth.sh@44 -- # digest=sha512 00:20:04.564 16:16:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:04.564 16:16:05 -- host/auth.sh@44 -- # keyid=3 00:20:04.564 16:16:05 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:20:04.564 16:16:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:04.564 16:16:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:04.564 16:16:05 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:20:04.564 16:16:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:20:04.564 16:16:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:04.564 16:16:05 -- host/auth.sh@68 -- # digest=sha512 00:20:04.564 16:16:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:04.564 16:16:05 -- host/auth.sh@68 -- # keyid=3 00:20:04.564 16:16:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:04.564 16:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.564 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:04.564 16:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.564 16:16:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:04.564 16:16:05 -- nvmf/common.sh@717 -- # local ip 00:20:04.564 16:16:05 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:20:04.564 16:16:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:04.564 16:16:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.564 16:16:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.564 16:16:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:04.564 16:16:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.564 16:16:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:04.564 16:16:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:04.564 16:16:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:04.564 16:16:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:04.564 16:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.564 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:04.828 nvme0n1 00:20:04.828 16:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.086 16:16:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.086 16:16:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:05.086 16:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.086 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:05.086 16:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.086 16:16:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.086 16:16:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.086 16:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.086 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:05.086 16:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.086 16:16:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:05.086 16:16:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:05.086 16:16:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:05.086 16:16:06 -- host/auth.sh@44 -- # digest=sha512 00:20:05.086 16:16:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.086 16:16:06 -- host/auth.sh@44 -- # keyid=4 00:20:05.086 16:16:06 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:20:05.086 16:16:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:05.086 16:16:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:05.086 16:16:06 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:20:05.086 16:16:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:20:05.086 16:16:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:05.087 16:16:06 -- host/auth.sh@68 -- # digest=sha512 00:20:05.087 16:16:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:05.087 16:16:06 -- host/auth.sh@68 -- # keyid=4 00:20:05.087 16:16:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:05.087 16:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.087 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:05.087 16:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.087 16:16:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:05.087 16:16:06 -- nvmf/common.sh@717 -- # local ip 00:20:05.087 16:16:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:05.087 16:16:06 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:05.087 16:16:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.087 16:16:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.087 16:16:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:05.087 16:16:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.087 16:16:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:05.087 16:16:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:05.087 16:16:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:05.087 16:16:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:05.087 16:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.087 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:05.653 nvme0n1 00:20:05.653 16:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.653 16:16:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.653 16:16:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:05.653 16:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.653 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:05.653 16:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.653 16:16:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.653 16:16:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.653 16:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.653 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:05.653 16:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.653 16:16:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.653 16:16:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:05.653 16:16:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:05.653 16:16:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:05.653 16:16:06 -- host/auth.sh@44 -- # digest=sha512 00:20:05.653 16:16:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.653 16:16:06 -- host/auth.sh@44 -- # keyid=0 00:20:05.653 16:16:06 -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:20:05.653 16:16:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:05.653 16:16:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:05.653 16:16:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ODI3M2M4ZDhmMmQwNDBkODNjODM1YmQ2ODQ5NGUzZTGffkdk: 00:20:05.653 16:16:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:20:05.653 16:16:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:05.653 16:16:06 -- host/auth.sh@68 -- # digest=sha512 00:20:05.653 16:16:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:05.653 16:16:06 -- host/auth.sh@68 -- # keyid=0 00:20:05.653 16:16:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:05.653 16:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.653 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:05.653 16:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.653 16:16:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:05.653 16:16:06 -- nvmf/common.sh@717 -- # local ip 00:20:05.653 16:16:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:05.653 16:16:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:05.653 16:16:06 -- nvmf/common.sh@720 -- # 
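From here the trace repeats the block above verbatim for the next DH group: the driver at host/auth.sh@108-110 walks every (dhgroup, keyid) pair for the sha512 digest, so the five ffdhe6144 iterations are now followed by five ffdhe8192 ones. Reconstructed from the xtrace markers, the loop is simply:

  for dhgroup in "${dhgroups[@]}"; do        # ... ffdhe6144 ffdhe8192
      for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done
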
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.653 16:16:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.653 16:16:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:05.653 16:16:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.653 16:16:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:05.653 16:16:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:05.653 16:16:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:05.653 16:16:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:05.653 16:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.653 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:06.586 nvme0n1 00:20:06.586 16:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.586 16:16:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.586 16:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.586 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.586 16:16:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:06.586 16:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.586 16:16:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.586 16:16:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.586 16:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.586 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.586 16:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.586 16:16:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:06.586 16:16:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:06.586 16:16:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:06.586 16:16:07 -- host/auth.sh@44 -- # digest=sha512 00:20:06.586 16:16:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.586 16:16:07 -- host/auth.sh@44 -- # keyid=1 00:20:06.586 16:16:07 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:06.586 16:16:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:06.586 16:16:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:06.586 16:16:07 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:06.586 16:16:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:20:06.586 16:16:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:06.586 16:16:07 -- host/auth.sh@68 -- # digest=sha512 00:20:06.586 16:16:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:06.586 16:16:07 -- host/auth.sh@68 -- # keyid=1 00:20:06.586 16:16:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:06.586 16:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.586 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.587 16:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.587 16:16:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:06.587 16:16:07 -- nvmf/common.sh@717 -- # local ip 00:20:06.587 16:16:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:06.587 16:16:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:06.587 16:16:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.587 16:16:07 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.587 16:16:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:06.587 16:16:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.587 16:16:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:06.587 16:16:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:06.587 16:16:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:06.587 16:16:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:06.587 16:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.587 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:07.519 nvme0n1 00:20:07.519 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.519 16:16:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.519 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.519 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:20:07.519 16:16:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:07.519 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.519 16:16:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.519 16:16:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.519 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.519 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:20:07.519 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.519 16:16:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:07.519 16:16:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:07.519 16:16:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:07.519 16:16:08 -- host/auth.sh@44 -- # digest=sha512 00:20:07.519 16:16:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.519 16:16:08 -- host/auth.sh@44 -- # keyid=2 00:20:07.519 16:16:08 -- host/auth.sh@45 -- # key=DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:20:07.519 16:16:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:07.519 16:16:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:07.519 16:16:08 -- host/auth.sh@49 -- # echo DHHC-1:01:MTMxY2FkYzk3NDQ2ZmE1MzJlMDRhMzI2YmJhNTljMGND93/e: 00:20:07.519 16:16:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:20:07.519 16:16:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:07.519 16:16:08 -- host/auth.sh@68 -- # digest=sha512 00:20:07.519 16:16:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:07.519 16:16:08 -- host/auth.sh@68 -- # keyid=2 00:20:07.519 16:16:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:07.519 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.520 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:20:07.520 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.520 16:16:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:07.520 16:16:08 -- nvmf/common.sh@717 -- # local ip 00:20:07.520 16:16:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:07.520 16:16:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:07.520 16:16:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.520 16:16:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.520 16:16:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:07.520 16:16:08 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:20:07.520 16:16:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:07.520 16:16:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:07.520 16:16:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:07.520 16:16:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:07.520 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.520 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:20:08.453 nvme0n1 00:20:08.453 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.453 16:16:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.453 16:16:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:08.453 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.453 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:08.453 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.453 16:16:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.453 16:16:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.453 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.453 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:08.453 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.453 16:16:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:08.453 16:16:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:08.453 16:16:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:08.453 16:16:09 -- host/auth.sh@44 -- # digest=sha512 00:20:08.453 16:16:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.453 16:16:09 -- host/auth.sh@44 -- # keyid=3 00:20:08.453 16:16:09 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:20:08.453 16:16:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:08.453 16:16:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:08.453 16:16:09 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGVmMTc0M2I0ZWE4YmM4ZTM1NWZmMDA5ZTIyMDkwNjMzMGZiNWI1YThiMTMxZTMx/i3aOw==: 00:20:08.453 16:16:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:20:08.453 16:16:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:08.453 16:16:09 -- host/auth.sh@68 -- # digest=sha512 00:20:08.453 16:16:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:08.453 16:16:09 -- host/auth.sh@68 -- # keyid=3 00:20:08.453 16:16:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:08.453 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.453 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:08.453 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.453 16:16:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:08.453 16:16:09 -- nvmf/common.sh@717 -- # local ip 00:20:08.453 16:16:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:08.453 16:16:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:08.453 16:16:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.453 16:16:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.453 16:16:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:08.453 16:16:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.711 16:16:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:08.711 16:16:09 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:08.711 16:16:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:08.711 16:16:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:08.711 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.711 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:09.644 nvme0n1 00:20:09.644 16:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.644 16:16:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.644 16:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.644 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:20:09.644 16:16:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:09.644 16:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.644 16:16:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.644 16:16:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.644 16:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.644 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:20:09.644 16:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.644 16:16:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:09.644 16:16:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:09.644 16:16:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:09.644 16:16:10 -- host/auth.sh@44 -- # digest=sha512 00:20:09.644 16:16:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:09.644 16:16:10 -- host/auth.sh@44 -- # keyid=4 00:20:09.644 16:16:10 -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:20:09.644 16:16:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:09.644 16:16:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:09.644 16:16:10 -- host/auth.sh@49 -- # echo DHHC-1:03:MTE2OGQ3Y2U1MjBkZTI0OWE0MTU5MGNmOGU5YWRmMzA5N2JiOWNmZGUyN2FkYjgyYTdkNzI4MWFjMTBlN2UzZmQQwCw=: 00:20:09.644 16:16:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:20:09.644 16:16:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:09.644 16:16:10 -- host/auth.sh@68 -- # digest=sha512 00:20:09.644 16:16:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:09.644 16:16:10 -- host/auth.sh@68 -- # keyid=4 00:20:09.644 16:16:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:09.644 16:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.644 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:20:09.644 16:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.644 16:16:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:09.644 16:16:10 -- nvmf/common.sh@717 -- # local ip 00:20:09.644 16:16:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:09.644 16:16:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:09.644 16:16:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.644 16:16:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.644 16:16:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:09.644 16:16:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.644 16:16:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:09.644 16:16:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:09.644 16:16:10 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:09.644 16:16:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.644 16:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.644 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:20:10.578 nvme0n1 00:20:10.578 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.578 16:16:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:10.578 16:16:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.578 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.578 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.578 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.578 16:16:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.578 16:16:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.578 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.578 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.578 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.578 16:16:11 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:10.578 16:16:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:10.578 16:16:11 -- host/auth.sh@44 -- # digest=sha256 00:20:10.578 16:16:11 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.578 16:16:11 -- host/auth.sh@44 -- # keyid=1 00:20:10.578 16:16:11 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:10.578 16:16:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:10.578 16:16:11 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:10.578 16:16:11 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ1ZTM1MjYyNGQ2YTZjMjBmYjhjM2I0YTE1M2YyZmVlNDI3ZGM0NDY1MDE4MWRiB21idA==: 00:20:10.578 16:16:11 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.578 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.578 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.578 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.578 16:16:11 -- host/auth.sh@119 -- # get_main_ns_ip 00:20:10.578 16:16:11 -- nvmf/common.sh@717 -- # local ip 00:20:10.578 16:16:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.578 16:16:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.578 16:16:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.578 16:16:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.578 16:16:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:10.578 16:16:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.578 16:16:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.578 16:16:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.578 16:16:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.578 16:16:11 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:10.578 16:16:11 -- common/autotest_common.sh@638 -- # local es=0 00:20:10.578 16:16:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:10.578 
16:16:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:10.578 16:16:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:10.578 16:16:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:10.578 16:16:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:10.578 16:16:11 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:10.578 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.578 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.578 request: 00:20:10.578 { 00:20:10.578 "name": "nvme0", 00:20:10.578 "trtype": "tcp", 00:20:10.578 "traddr": "10.0.0.1", 00:20:10.578 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:10.578 "adrfam": "ipv4", 00:20:10.578 "trsvcid": "4420", 00:20:10.578 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:10.578 "method": "bdev_nvme_attach_controller", 00:20:10.578 "req_id": 1 00:20:10.578 } 00:20:10.578 Got JSON-RPC error response 00:20:10.578 response: 00:20:10.579 { 00:20:10.579 "code": -32602, 00:20:10.579 "message": "Invalid parameters" 00:20:10.579 } 00:20:10.579 16:16:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:10.579 16:16:11 -- common/autotest_common.sh@641 -- # es=1 00:20:10.579 16:16:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:10.579 16:16:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:10.579 16:16:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:10.579 16:16:11 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.579 16:16:11 -- host/auth.sh@121 -- # jq length 00:20:10.579 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.579 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.579 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.579 16:16:11 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:20:10.579 16:16:11 -- host/auth.sh@124 -- # get_main_ns_ip 00:20:10.579 16:16:11 -- nvmf/common.sh@717 -- # local ip 00:20:10.579 16:16:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.579 16:16:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.579 16:16:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.579 16:16:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.579 16:16:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:10.579 16:16:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.579 16:16:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.579 16:16:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.579 16:16:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.579 16:16:11 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:10.579 16:16:11 -- common/autotest_common.sh@638 -- # local es=0 00:20:10.579 16:16:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:10.579 16:16:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:10.579 16:16:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:10.579 16:16:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:10.579 16:16:11 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:10.579 16:16:11 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:10.579 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.579 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.579 request: 00:20:10.579 { 00:20:10.579 "name": "nvme0", 00:20:10.579 "trtype": "tcp", 00:20:10.579 "traddr": "10.0.0.1", 00:20:10.579 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:10.579 "adrfam": "ipv4", 00:20:10.579 "trsvcid": "4420", 00:20:10.579 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:10.579 "dhchap_key": "key2", 00:20:10.579 "method": "bdev_nvme_attach_controller", 00:20:10.579 "req_id": 1 00:20:10.579 } 00:20:10.579 Got JSON-RPC error response 00:20:10.579 response: 00:20:10.579 { 00:20:10.579 "code": -32602, 00:20:10.579 "message": "Invalid parameters" 00:20:10.579 } 00:20:10.579 16:16:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:10.579 16:16:11 -- common/autotest_common.sh@641 -- # es=1 00:20:10.579 16:16:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:10.579 16:16:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:10.579 16:16:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:10.579 16:16:11 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.579 16:16:11 -- host/auth.sh@127 -- # jq length 00:20:10.579 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.579 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.579 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.579 16:16:11 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:20:10.579 16:16:11 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:20:10.579 16:16:11 -- host/auth.sh@130 -- # cleanup 00:20:10.579 16:16:11 -- host/auth.sh@24 -- # nvmftestfini 00:20:10.579 16:16:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:10.579 16:16:11 -- nvmf/common.sh@117 -- # sync 00:20:10.579 16:16:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.579 16:16:11 -- nvmf/common.sh@120 -- # set +e 00:20:10.579 16:16:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.579 16:16:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.579 rmmod nvme_tcp 00:20:10.579 rmmod nvme_fabrics 00:20:10.579 16:16:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.579 16:16:11 -- nvmf/common.sh@124 -- # set -e 00:20:10.579 16:16:11 -- nvmf/common.sh@125 -- # return 0 00:20:10.579 16:16:11 -- nvmf/common.sh@478 -- # '[' -n 3460112 ']' 00:20:10.579 16:16:11 -- nvmf/common.sh@479 -- # killprocess 3460112 00:20:10.579 16:16:11 -- common/autotest_common.sh@936 -- # '[' -z 3460112 ']' 00:20:10.579 16:16:11 -- common/autotest_common.sh@940 -- # kill -0 3460112 00:20:10.579 16:16:11 -- common/autotest_common.sh@941 -- # uname 00:20:10.579 16:16:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:10.579 16:16:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3460112 00:20:10.838 16:16:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:10.838 16:16:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:10.838 16:16:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3460112' 00:20:10.838 killing process with pid 3460112 00:20:10.838 16:16:11 -- common/autotest_common.sh@955 -- # kill 3460112 00:20:10.838 16:16:11 -- 
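The two rejected attaches above are the point of host/auth.sh@119 and @124: once the target expects sha256/ffdhe2048 with key1, an attach without any DHCHAP key and an attach with the mismatched key2 must both fail, and each does with JSON-RPC error -32602 (Invalid parameters). The NOT wrapper visible in the trace inverts the exit status so an expected failure counts as a pass; stripped of the es bookkeeping seen at autotest_common.sh@638-665, its essential behavior is:

  # Reduced sketch of the NOT helper; the real one also validates the
  # argument and propagates large exit codes, as the trace shows.
  NOT() {
      if "$@"; then
          return 1    # the command unexpectedly succeeded
      fi
      return 0        # it failed, which is what we wanted
  }

  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0   # no --dhchap-key: must be refused
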
common/autotest_common.sh@960 -- # wait 3460112 00:20:11.097 16:16:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:11.097 16:16:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:11.097 16:16:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:11.097 16:16:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.097 16:16:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.097 16:16:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.097 16:16:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.097 16:16:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.000 16:16:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.000 16:16:14 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:13.000 16:16:14 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:13.000 16:16:14 -- host/auth.sh@27 -- # clean_kernel_target 00:20:13.000 16:16:14 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:13.000 16:16:14 -- nvmf/common.sh@675 -- # echo 0 00:20:13.000 16:16:14 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:13.000 16:16:14 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:13.000 16:16:14 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:13.000 16:16:14 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:13.000 16:16:14 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:13.000 16:16:14 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:20:13.000 16:16:14 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:20:14.374 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:14.374 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:14.374 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:14.374 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:14.374 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:14.374 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:14.374 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:14.374 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:14.374 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:14.374 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:14.374 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:14.374 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:14.374 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:14.374 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:14.374 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:14.374 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:14.942 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:20:15.200 16:16:16 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.EbG /tmp/spdk.key-null.44I /tmp/spdk.key-sha256.zo7 /tmp/spdk.key-sha384.C0z /tmp/spdk.key-sha512.JF6 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:20:15.200 16:16:16 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:20:16.134 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:20:16.134 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:20:16.134 0000:00:04.5 (8086 0e25): Already using the 
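cleanup at host/auth.sh@24-28 plus clean_kernel_target tears the fixture down in strict dependency order, because a configfs rmdir fails while a directory still has children or symlinks. Condensed from the commands in the trace (the redirect target of the bare 'echo 0' at nvmf/common.sh@675 is not visible; the namespace enable attribute below is an assumption):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"   # unlink host first
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$subsys/namespaces/1/enable"                 # assumed target file
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                            # only now unloadable
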
vfio-pci driver 00:20:16.134 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:20:16.134 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:20:16.134 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:20:16.134 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:20:16.134 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:20:16.134 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:20:16.134 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:16.134 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:20:16.134 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:20:16.134 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:20:16.134 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:20:16.134 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:20:16.134 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:20:16.134 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:20:16.392 00:20:16.392 real 0m45.919s 00:20:16.392 user 0m43.465s 00:20:16.392 sys 0m5.402s 00:20:16.392 16:16:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:16.392 16:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:16.392 ************************************ 00:20:16.392 END TEST nvmf_auth 00:20:16.392 ************************************ 00:20:16.392 16:16:17 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:20:16.392 16:16:17 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:16.392 16:16:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:16.392 16:16:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.392 16:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:16.392 ************************************ 00:20:16.392 START TEST nvmf_digest 00:20:16.392 ************************************ 00:20:16.392 16:16:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:16.650 * Looking for test storage... 
00:20:16.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:16.650 16:16:17 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.650 16:16:17 -- nvmf/common.sh@7 -- # uname -s 00:20:16.650 16:16:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.650 16:16:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.650 16:16:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.650 16:16:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.650 16:16:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.650 16:16:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.650 16:16:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.650 16:16:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.650 16:16:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.650 16:16:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.650 16:16:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:16.650 16:16:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:16.650 16:16:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.650 16:16:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.650 16:16:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.650 16:16:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.650 16:16:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.650 16:16:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.650 16:16:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.650 16:16:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.650 16:16:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.650 16:16:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.650 16:16:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.650 16:16:17 -- paths/export.sh@5 -- # export PATH 00:20:16.650 16:16:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.650 16:16:17 -- nvmf/common.sh@47 -- # : 0 00:20:16.650 16:16:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.650 16:16:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.650 16:16:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.650 16:16:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.650 16:16:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.650 16:16:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.650 16:16:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.650 16:16:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.650 16:16:17 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:16.650 16:16:17 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:16.650 16:16:17 -- host/digest.sh@16 -- # runtime=2 00:20:16.650 16:16:17 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:16.650 16:16:17 -- host/digest.sh@138 -- # nvmftestinit 00:20:16.650 16:16:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:16.650 16:16:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.650 16:16:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:16.650 16:16:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:16.650 16:16:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:16.650 16:16:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.650 16:16:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.650 16:16:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.650 16:16:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:16.650 16:16:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:16.650 16:16:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:16.650 16:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:18.549 16:16:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:18.549 16:16:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.549 16:16:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.549 16:16:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.549 16:16:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.549 16:16:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.549 16:16:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.549 16:16:19 -- 
nvmf/common.sh@295 -- # net_devs=() 00:20:18.549 16:16:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.549 16:16:19 -- nvmf/common.sh@296 -- # e810=() 00:20:18.549 16:16:19 -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.549 16:16:19 -- nvmf/common.sh@297 -- # x722=() 00:20:18.549 16:16:19 -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.549 16:16:19 -- nvmf/common.sh@298 -- # mlx=() 00:20:18.549 16:16:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.549 16:16:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.549 16:16:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.549 16:16:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.549 16:16:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.549 16:16:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.549 16:16:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.549 16:16:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.549 16:16:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.549 16:16:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:18.549 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:18.549 16:16:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.549 16:16:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.549 16:16:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.549 16:16:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.549 16:16:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.550 16:16:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:18.550 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:18.550 16:16:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.550 16:16:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.550 16:16:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.550 16:16:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:18.550 16:16:19 -- nvmf/common.sh@388 -- # 
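gather_supported_nvmf_pci_devs matches the PCI bus against the Intel and Mellanox device-ID tables above; on this rig it finds the two E810 functions (0x159b at 0000:09:00.0/1) and then resolves each function to its netdev through sysfs rather than trusting interface names. The resolution idiom, as traced at nvmf/common.sh@383-390:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # basename: cvl_0_0
      net_devs+=("${pci_net_devs[@]}")
  done
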
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.550 16:16:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:18.550 Found net devices under 0000:09:00.0: cvl_0_0 00:20:18.550 16:16:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.550 16:16:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.550 16:16:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.550 16:16:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:18.550 16:16:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.550 16:16:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:18.550 Found net devices under 0000:09:00.1: cvl_0_1 00:20:18.550 16:16:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.550 16:16:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:18.550 16:16:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:18.550 16:16:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:18.550 16:16:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:18.550 16:16:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.550 16:16:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.550 16:16:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.550 16:16:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.550 16:16:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.550 16:16:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.550 16:16:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.550 16:16:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.550 16:16:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.550 16:16:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.550 16:16:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.550 16:16:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.550 16:16:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.808 16:16:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.808 16:16:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.808 16:16:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.808 16:16:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.808 16:16:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.808 16:16:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.808 16:16:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:20:18.808 00:20:18.808 --- 10.0.0.2 ping statistics --- 00:20:18.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.808 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:18.808 16:16:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:20:18.809 00:20:18.809 --- 10.0.0.1 ping statistics --- 00:20:18.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.809 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:20:18.809 16:16:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.809 16:16:19 -- nvmf/common.sh@411 -- # return 0 00:20:18.809 16:16:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:18.809 16:16:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.809 16:16:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:18.809 16:16:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:18.809 16:16:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.809 16:16:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:18.809 16:16:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:18.809 16:16:19 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:18.809 16:16:19 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:18.809 16:16:19 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:18.809 16:16:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:18.809 16:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:18.809 16:16:19 -- common/autotest_common.sh@10 -- # set +x 00:20:18.809 ************************************ 00:20:18.809 START TEST nvmf_digest_clean 00:20:18.809 ************************************ 00:20:18.809 16:16:20 -- common/autotest_common.sh@1111 -- # run_digest 00:20:18.809 16:16:20 -- host/digest.sh@120 -- # local dsa_initiator 00:20:18.809 16:16:20 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:18.809 16:16:20 -- host/digest.sh@121 -- # dsa_initiator=false 00:20:18.809 16:16:20 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:18.809 16:16:20 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:18.809 16:16:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:18.809 16:16:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:18.809 16:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.809 16:16:20 -- nvmf/common.sh@470 -- # nvmfpid=3469146 00:20:18.809 16:16:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:18.809 16:16:20 -- nvmf/common.sh@471 -- # waitforlisten 3469146 00:20:18.809 16:16:20 -- common/autotest_common.sh@817 -- # '[' -z 3469146 ']' 00:20:18.809 16:16:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.809 16:16:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:18.809 16:16:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.809 16:16:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:18.809 16:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:19.066 [2024-04-24 16:16:20.100145] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
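
For reference, the nvmf_tcp_init sequence traced above splits this host's two E810 ports into a back-to-back target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits NVMe/TCP traffic on port 4420 before both directions are ping-tested. Condensed into a standalone sketch, using the interface names and addresses from this run; waits and error handling are omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability
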
00:20:19.066 [2024-04-24 16:16:20.100230] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.066 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.066 [2024-04-24 16:16:20.169249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.066 [2024-04-24 16:16:20.286917] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.066 [2024-04-24 16:16:20.286980] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.066 [2024-04-24 16:16:20.287008] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.066 [2024-04-24 16:16:20.287021] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.066 [2024-04-24 16:16:20.287033] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.066 [2024-04-24 16:16:20.287077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.066 16:16:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:19.066 16:16:20 -- common/autotest_common.sh@850 -- # return 0 00:20:19.066 16:16:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:19.066 16:16:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:19.066 16:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:19.324 16:16:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.324 16:16:20 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:19.324 16:16:20 -- host/digest.sh@126 -- # common_target_config 00:20:19.324 16:16:20 -- host/digest.sh@43 -- # rpc_cmd 00:20:19.324 16:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.324 16:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:19.324 null0 00:20:19.324 [2024-04-24 16:16:20.479962] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.324 [2024-04-24 16:16:20.504164] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.324 16:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.324 16:16:20 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:19.324 16:16:20 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:19.324 16:16:20 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:19.324 16:16:20 -- host/digest.sh@80 -- # rw=randread 00:20:19.324 16:16:20 -- host/digest.sh@80 -- # bs=4096 00:20:19.324 16:16:20 -- host/digest.sh@80 -- # qd=128 00:20:19.324 16:16:20 -- host/digest.sh@80 -- # scan_dsa=false 00:20:19.324 16:16:20 -- host/digest.sh@83 -- # bperfpid=3469169 00:20:19.324 16:16:20 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:19.324 16:16:20 -- host/digest.sh@84 -- # waitforlisten 3469169 /var/tmp/bperf.sock 00:20:19.324 16:16:20 -- common/autotest_common.sh@817 -- # '[' -z 3469169 ']' 00:20:19.324 16:16:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:19.324 16:16:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:19.324 16:16:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:19.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:19.324 16:16:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:19.324 16:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:19.324 [2024-04-24 16:16:20.549761] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:20:19.324 [2024-04-24 16:16:20.549853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469169 ] 00:20:19.324 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.582 [2024-04-24 16:16:20.612535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.582 [2024-04-24 16:16:20.726739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.515 16:16:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:20.515 16:16:21 -- common/autotest_common.sh@850 -- # return 0 00:20:20.515 16:16:21 -- host/digest.sh@86 -- # false 00:20:20.515 16:16:21 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:20.515 16:16:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:20.773 16:16:21 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:20.773 16:16:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:21.030 nvme0n1 00:20:21.030 16:16:22 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:21.030 16:16:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:21.288 Running I/O for 2 seconds... 
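
Each run_bperf call in this test follows the RPC choreography just traced: bdevperf starts suspended on a private socket with --wait-for-rpc, framework init is completed over RPC, an NVMe-oF controller is attached with the TCP data digest enabled, and bdevperf.py then drives the timed run. A minimal sketch with the same paths and arguments as this first run (waitforlisten and cleanup omitted):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
    # --ddgst enables the NVMe/TCP data digest (CRC32C) on the initiator side
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
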
00:20:23.217
00:20:23.217 Latency(us)
00:20:23.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:23.217 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:23.217 nvme0n1 : 2.00 19238.73 75.15 0.00 0.00 6643.11 2997.67 22330.79
00:20:23.217 ===================================================================================================================
00:20:23.217 Total : 19238.73 75.15 0.00 0.00 6643.11 2997.67 22330.79
00:20:23.217 0
00:20:23.217 16:16:24 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:20:23.217 16:16:24 -- host/digest.sh@93 -- # get_accel_stats
00:20:23.217 16:16:24 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:20:23.217 16:16:24 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:20:23.217 | select(.opcode=="crc32c")
00:20:23.217 | "\(.module_name) \(.executed)"'
00:20:23.217 16:16:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:20:23.475 16:16:24 -- host/digest.sh@94 -- # false
00:20:23.475 16:16:24 -- host/digest.sh@94 -- # exp_module=software
00:20:23.475 16:16:24 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:20:23.475 16:16:24 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:20:23.475 16:16:24 -- host/digest.sh@98 -- # killprocess 3469169
00:20:23.475 16:16:24 -- common/autotest_common.sh@936 -- # '[' -z 3469169 ']'
00:20:23.475 16:16:24 -- common/autotest_common.sh@940 -- # kill -0 3469169
00:20:23.475 16:16:24 -- common/autotest_common.sh@941 -- # uname
00:20:23.475 16:16:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:23.475 16:16:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3469169
00:20:23.475 16:16:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:23.475 16:16:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:23.475 16:16:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3469169'
00:20:23.475 killing process with pid 3469169
00:20:23.475 16:16:24 -- common/autotest_common.sh@955 -- # kill 3469169
00:20:23.475 Received shutdown signal, test time was about 2.000000 seconds
00:20:23.475
00:20:23.475 Latency(us)
00:20:23.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:23.475 ===================================================================================================================
00:20:23.475 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:23.475 16:16:24 -- common/autotest_common.sh@960 -- # wait 3469169
00:20:23.733 16:16:24 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:20:23.733 16:16:24 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:20:23.733 16:16:24 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:20:23.733 16:16:24 -- host/digest.sh@80 -- # rw=randread
00:20:23.733 16:16:24 -- host/digest.sh@80 -- # bs=131072
00:20:23.733 16:16:24 -- host/digest.sh@80 -- # qd=16
00:20:23.733 16:16:24 -- host/digest.sh@80 -- # scan_dsa=false
00:20:23.733 16:16:24 -- host/digest.sh@83 -- # bperfpid=3469707
00:20:23.733 16:16:24 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:20:23.733 16:16:24 -- host/digest.sh@84 -- # waitforlisten 3469707 /var/tmp/bperf.sock
00:20:23.733 16:16:24 -- common/autotest_common.sh@817 -- # '[' -z 3469707 ']'
00:20:23.733 16:16:24 --
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:23.733 16:16:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:23.733 16:16:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:23.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:23.733 16:16:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:23.733 16:16:24 -- common/autotest_common.sh@10 -- # set +x 00:20:23.733 [2024-04-24 16:16:24.967704] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:20:23.733 [2024-04-24 16:16:24.967805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469707 ] 00:20:23.733 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:23.733 Zero copy mechanism will not be used. 00:20:23.733 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.991 [2024-04-24 16:16:25.029940] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.991 [2024-04-24 16:16:25.143199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.991 16:16:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:23.991 16:16:25 -- common/autotest_common.sh@850 -- # return 0 00:20:23.991 16:16:25 -- host/digest.sh@86 -- # false 00:20:23.991 16:16:25 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:23.991 16:16:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:24.249 16:16:25 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:24.249 16:16:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:24.506 nvme0n1 00:20:24.765 16:16:25 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:24.765 16:16:25 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:24.765 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:24.765 Zero copy mechanism will not be used. 00:20:24.765 Running I/O for 2 seconds... 
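
The get_accel_stats step traced after each run is what makes these runs digest tests rather than plain benchmarks: it asks the bdevperf app's accel framework how many crc32c operations were executed and by which module, using the jq filter shown above. With dsa_initiator false, the expected module is software, and the run only passes if the executed count is non-zero. A minimal sketch of that check, assuming the same bperf socket:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    exp_module=software                 # no DSA offload configured in this run
    (( acc_executed > 0 ))              # digests must actually have been computed
    [[ $acc_module == "$exp_module" ]]  # and by the expected module
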
00:20:26.663
00:20:26.663 Latency(us)
00:20:26.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:26.663 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:26.663 nvme0n1 : 2.01 2057.64 257.21 0.00 0.00 7770.82 6310.87 18544.26
00:20:26.663 ===================================================================================================================
00:20:26.663 Total : 2057.64 257.21 0.00 0.00 7770.82 6310.87 18544.26
00:20:26.663 0
00:20:26.921 16:16:27 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:20:26.921 16:16:27 -- host/digest.sh@93 -- # get_accel_stats
00:20:26.921 16:16:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:20:26.921 16:16:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:20:26.921 16:16:27 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:20:26.921 | select(.opcode=="crc32c")
00:20:26.921 | "\(.module_name) \(.executed)"'
00:20:26.921 16:16:28 -- host/digest.sh@94 -- # false
00:20:26.921 16:16:28 -- host/digest.sh@94 -- # exp_module=software
00:20:26.921 16:16:28 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:20:26.921 16:16:28 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:20:26.921 16:16:28 -- host/digest.sh@98 -- # killprocess 3469707
00:20:26.921 16:16:28 -- common/autotest_common.sh@936 -- # '[' -z 3469707 ']'
00:20:26.921 16:16:28 -- common/autotest_common.sh@940 -- # kill -0 3469707
00:20:26.921 16:16:28 -- common/autotest_common.sh@941 -- # uname
00:20:26.921 16:16:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:26.921 16:16:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3469707
00:20:27.179 16:16:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:27.179 16:16:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:27.179 16:16:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3469707'
00:20:27.179 killing process with pid 3469707
00:20:27.179 16:16:28 -- common/autotest_common.sh@955 -- # kill 3469707
00:20:27.179 Received shutdown signal, test time was about 2.000000 seconds
00:20:27.179
00:20:27.179 Latency(us)
00:20:27.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:27.179 ===================================================================================================================
00:20:27.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:27.179 16:16:28 -- common/autotest_common.sh@960 -- # wait 3469707
00:20:27.436 16:16:28 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:20:27.436 16:16:28 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:20:27.436 16:16:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:20:27.436 16:16:28 -- host/digest.sh@80 -- # rw=randwrite
00:20:27.436 16:16:28 -- host/digest.sh@80 -- # bs=4096
00:20:27.436 16:16:28 -- host/digest.sh@80 -- # qd=128
00:20:27.436 16:16:28 -- host/digest.sh@80 -- # scan_dsa=false
00:20:27.436 16:16:28 -- host/digest.sh@83 -- # bperfpid=3470113
00:20:27.436 16:16:28 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:20:27.437 16:16:28 -- host/digest.sh@84 -- # waitforlisten 3470113 /var/tmp/bperf.sock
00:20:27.437 16:16:28 -- common/autotest_common.sh@817 -- # '[' -z 3470113 ']'
00:20:27.437 16:16:28
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:27.437 16:16:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:27.437 16:16:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:27.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:27.437 16:16:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:27.437 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:20:27.437 [2024-04-24 16:16:28.539942] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:20:27.437 [2024-04-24 16:16:28.540039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470113 ] 00:20:27.437 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.437 [2024-04-24 16:16:28.602815] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.437 [2024-04-24 16:16:28.721048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.367 16:16:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:28.367 16:16:29 -- common/autotest_common.sh@850 -- # return 0 00:20:28.367 16:16:29 -- host/digest.sh@86 -- # false 00:20:28.367 16:16:29 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:28.367 16:16:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:28.626 16:16:29 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:28.626 16:16:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:28.883 nvme0n1 00:20:28.883 16:16:30 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:28.883 16:16:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:29.141 Running I/O for 2 seconds... 
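
For orientation, run_digest sweeps four bdevperf configurations, one run_bperf call per combination of workload, I/O size, and queue depth; the 131072-byte runs exceed bdevperf's 65536-byte zero-copy threshold, which is why they log 'Zero copy mechanism will not be used.'. Condensed from the calls at host/digest.sh@128-131:

    run_bperf randread  4096   128 false   # 4 KiB I/O, queue depth 128
    run_bperf randread  131072 16  false   # 128 KiB I/O, queue depth 16 (no zero copy)
    run_bperf randwrite 4096   128 false
    run_bperf randwrite 131072 16  false   # last argument is scan_dsa, disabled here
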
00:20:31.040
00:20:31.040 Latency(us)
00:20:31.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.040 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:20:31.040 nvme0n1 : 2.01 20826.73 81.35 0.00 0.00 6139.49 2839.89 10000.31
00:20:31.040 ===================================================================================================================
00:20:31.040 Total : 20826.73 81.35 0.00 0.00 6139.49 2839.89 10000.31
00:20:31.040 0
00:20:31.040 16:16:32 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:20:31.040 16:16:32 -- host/digest.sh@93 -- # get_accel_stats
00:20:31.040 16:16:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:20:31.040 16:16:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:20:31.040 16:16:32 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:20:31.040 | select(.opcode=="crc32c")
00:20:31.040 | "\(.module_name) \(.executed)"'
00:20:31.298 16:16:32 -- host/digest.sh@94 -- # false
00:20:31.298 16:16:32 -- host/digest.sh@94 -- # exp_module=software
00:20:31.298 16:16:32 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:20:31.298 16:16:32 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:20:31.298 16:16:32 -- host/digest.sh@98 -- # killprocess 3470113
00:20:31.298 16:16:32 -- common/autotest_common.sh@936 -- # '[' -z 3470113 ']'
00:20:31.298 16:16:32 -- common/autotest_common.sh@940 -- # kill -0 3470113
00:20:31.298 16:16:32 -- common/autotest_common.sh@941 -- # uname
00:20:31.298 16:16:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:31.298 16:16:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3470113
00:20:31.298 16:16:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:31.298 16:16:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:31.298 16:16:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3470113'
00:20:31.298 killing process with pid 3470113
00:20:31.298 16:16:32 -- common/autotest_common.sh@955 -- # kill 3470113
00:20:31.298 Received shutdown signal, test time was about 2.000000 seconds
00:20:31.298
00:20:31.298 Latency(us)
00:20:31.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.299 ===================================================================================================================
00:20:31.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:31.299 16:16:32 -- common/autotest_common.sh@960 -- # wait 3470113
00:20:31.556 16:16:32 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:20:31.556 16:16:32 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:20:31.556 16:16:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:20:31.556 16:16:32 -- host/digest.sh@80 -- # rw=randwrite
00:20:31.556 16:16:32 -- host/digest.sh@80 -- # bs=131072
00:20:31.556 16:16:32 -- host/digest.sh@80 -- # qd=16
00:20:31.556 16:16:32 -- host/digest.sh@80 -- # scan_dsa=false
00:20:31.556 16:16:32 -- host/digest.sh@83 -- # bperfpid=3470650
00:20:31.556 16:16:32 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:20:31.556 16:16:32 -- host/digest.sh@84 -- # waitforlisten 3470650 /var/tmp/bperf.sock
00:20:31.556 16:16:32 -- common/autotest_common.sh@817 -- # '[' -z 3470650 ']'
16:16:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:31.557 16:16:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:31.557 16:16:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:31.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:31.557 16:16:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:31.557 16:16:32 -- common/autotest_common.sh@10 -- # set +x 00:20:31.815 [2024-04-24 16:16:32.851513] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:20:31.815 [2024-04-24 16:16:32.851581] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470650 ] 00:20:31.815 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:31.815 Zero copy mechanism will not be used. 00:20:31.815 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.815 [2024-04-24 16:16:32.914338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.815 [2024-04-24 16:16:33.028984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.815 16:16:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:31.815 16:16:33 -- common/autotest_common.sh@850 -- # return 0 00:20:31.815 16:16:33 -- host/digest.sh@86 -- # false 00:20:31.815 16:16:33 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:31.815 16:16:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:32.383 16:16:33 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:32.383 16:16:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:32.641 nvme0n1 00:20:32.641 16:16:33 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:32.641 16:16:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:32.900 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:32.900 Zero copy mechanism will not be used. 00:20:32.900 Running I/O for 2 seconds... 
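
The killprocess helper that ends every run can be reconstructed from the autotest_common.sh xtrace above (@936 through @960): it validates the pid, checks the process is alive and is not a sudo wrapper, then kills it and reaps the exit status. A sketch under that reading, not the verbatim helper:

    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1                            # @936: pid must be given
        kill -0 "$pid" || return 1                           # @940: process must exist
        if [ "$(uname)" = Linux ]; then                      # @941
            process_name=$(ps --no-headers -o comm= "$pid")  # @942: e.g. reactor_1
        fi
        [ "$process_name" = sudo ] && return 1               # @946: refuse sudo wrappers
        echo "killing process with pid $pid"                 # @954
        kill "$pid"                                          # @955
        wait "$pid"                                          # @960: reap, propagate status
    }
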
00:20:34.807
00:20:34.807 Latency(us)
00:20:34.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:34.807 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:34.807 nvme0n1 : 2.01 2125.86 265.73 0.00 0.00 7507.74 5946.79 14854.83
00:20:34.807 ===================================================================================================================
00:20:34.807 Total : 2125.86 265.73 0.00 0.00 7507.74 5946.79 14854.83
00:20:34.807 0
00:20:34.807 16:16:36 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:20:34.807 16:16:36 -- host/digest.sh@93 -- # get_accel_stats
00:20:34.807 16:16:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:20:34.807 16:16:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:20:34.807 16:16:36 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:20:34.807 | select(.opcode=="crc32c")
00:20:34.807 | "\(.module_name) \(.executed)"'
00:20:35.083 16:16:36 -- host/digest.sh@94 -- # false
00:20:35.083 16:16:36 -- host/digest.sh@94 -- # exp_module=software
00:20:35.083 16:16:36 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:20:35.083 16:16:36 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:20:35.083 16:16:36 -- host/digest.sh@98 -- # killprocess 3470650
00:20:35.083 16:16:36 -- common/autotest_common.sh@936 -- # '[' -z 3470650 ']'
00:20:35.083 16:16:36 -- common/autotest_common.sh@940 -- # kill -0 3470650
00:20:35.083 16:16:36 -- common/autotest_common.sh@941 -- # uname
00:20:35.083 16:16:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:35.083 16:16:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3470650
00:20:35.083 16:16:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:35.083 16:16:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:35.083 16:16:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3470650'
00:20:35.083 killing process with pid 3470650
00:20:35.083 16:16:36 -- common/autotest_common.sh@955 -- # kill 3470650
00:20:35.083 Received shutdown signal, test time was about 2.000000 seconds
00:20:35.083
00:20:35.083 Latency(us)
00:20:35.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:35.083 ===================================================================================================================
00:20:35.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:35.083 16:16:36 -- common/autotest_common.sh@960 -- # wait 3470650
00:20:35.385 16:16:36 -- host/digest.sh@132 -- # killprocess 3469146
00:20:35.385 16:16:36 -- common/autotest_common.sh@936 -- # '[' -z 3469146 ']'
00:20:35.385 16:16:36 -- common/autotest_common.sh@940 -- # kill -0 3469146
00:20:35.385 16:16:36 -- common/autotest_common.sh@941 -- # uname
00:20:35.385 16:16:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:35.385 16:16:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3469146
00:20:35.385 16:16:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:35.385 16:16:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:35.385 16:16:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3469146'
00:20:35.385 killing process with pid 3469146
00:20:35.385 16:16:36 -- common/autotest_common.sh@955 -- # kill 3469146
00:20:35.667 16:16:36 -- common/autotest_common.sh@960 -- # wait 3469146
00:20:35.667
00:20:35.667 real 0m16.798s 00:20:35.667 user 0m32.775s 00:20:35.667 sys 0m3.919s 00:20:35.667 16:16:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:35.667 16:16:36 -- common/autotest_common.sh@10 -- # set +x 00:20:35.667 ************************************ 00:20:35.667 END TEST nvmf_digest_clean 00:20:35.667 ************************************ 00:20:35.667 16:16:36 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:35.667 16:16:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:35.667 16:16:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:35.667 16:16:36 -- common/autotest_common.sh@10 -- # set +x 00:20:35.927 ************************************ 00:20:35.927 START TEST nvmf_digest_error 00:20:35.927 ************************************ 00:20:35.927 16:16:36 -- common/autotest_common.sh@1111 -- # run_digest_error 00:20:35.927 16:16:36 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:35.927 16:16:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:35.927 16:16:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:35.927 16:16:36 -- common/autotest_common.sh@10 -- # set +x 00:20:35.927 16:16:36 -- nvmf/common.sh@470 -- # nvmfpid=3471223 00:20:35.927 16:16:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:35.927 16:16:36 -- nvmf/common.sh@471 -- # waitforlisten 3471223 00:20:35.927 16:16:36 -- common/autotest_common.sh@817 -- # '[' -z 3471223 ']' 00:20:35.927 16:16:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.927 16:16:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.927 16:16:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.927 16:16:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.927 16:16:36 -- common/autotest_common.sh@10 -- # set +x 00:20:35.927 [2024-04-24 16:16:37.025285] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:20:35.927 [2024-04-24 16:16:37.025365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.927 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.927 [2024-04-24 16:16:37.089962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.927 [2024-04-24 16:16:37.194357] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.927 [2024-04-24 16:16:37.194421] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.927 [2024-04-24 16:16:37.194435] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.927 [2024-04-24 16:16:37.194447] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.927 [2024-04-24 16:16:37.194458] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.927 [2024-04-24 16:16:37.194491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.185 16:16:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:36.185 16:16:37 -- common/autotest_common.sh@850 -- # return 0 00:20:36.185 16:16:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:36.185 16:16:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:36.185 16:16:37 -- common/autotest_common.sh@10 -- # set +x 00:20:36.185 16:16:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.185 16:16:37 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:36.185 16:16:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.185 16:16:37 -- common/autotest_common.sh@10 -- # set +x 00:20:36.185 [2024-04-24 16:16:37.259088] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:36.185 16:16:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.185 16:16:37 -- host/digest.sh@105 -- # common_target_config 00:20:36.185 16:16:37 -- host/digest.sh@43 -- # rpc_cmd 00:20:36.185 16:16:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.185 16:16:37 -- common/autotest_common.sh@10 -- # set +x 00:20:36.185 null0 00:20:36.185 [2024-04-24 16:16:37.381851] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.185 [2024-04-24 16:16:37.406081] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.185 16:16:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.185 16:16:37 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:36.185 16:16:37 -- host/digest.sh@54 -- # local rw bs qd 00:20:36.185 16:16:37 -- host/digest.sh@56 -- # rw=randread 00:20:36.185 16:16:37 -- host/digest.sh@56 -- # bs=4096 00:20:36.185 16:16:37 -- host/digest.sh@56 -- # qd=128 00:20:36.185 16:16:37 -- host/digest.sh@58 -- # bperfpid=3471247 00:20:36.185 16:16:37 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:36.185 16:16:37 -- host/digest.sh@60 -- # waitforlisten 3471247 /var/tmp/bperf.sock 00:20:36.185 16:16:37 -- common/autotest_common.sh@817 -- # '[' -z 3471247 ']' 00:20:36.185 16:16:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:36.185 16:16:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:36.185 16:16:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:36.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:36.185 16:16:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:36.185 16:16:37 -- common/autotest_common.sh@10 -- # set +x 00:20:36.185 [2024-04-24 16:16:37.451368] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:20:36.185 [2024-04-24 16:16:37.451442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471247 ] 00:20:36.443 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.443 [2024-04-24 16:16:37.512403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.443 [2024-04-24 16:16:37.626582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.701 16:16:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:36.701 16:16:37 -- common/autotest_common.sh@850 -- # return 0 00:20:36.701 16:16:37 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:36.701 16:16:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:36.960 16:16:37 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:36.960 16:16:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.960 16:16:37 -- common/autotest_common.sh@10 -- # set +x 00:20:36.960 16:16:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.960 16:16:37 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:36.960 16:16:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:37.218 nvme0n1 00:20:37.218 16:16:38 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:37.218 16:16:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.218 16:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:37.218 16:16:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.218 16:16:38 -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:37.218 16:16:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:37.477 Running I/O for 2 seconds... 
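
The nvmf_digest_error pass differs from the clean pass only in the accel plumbing traced above: accel_assign_opc routes the crc32c opcode to the error-injection module on the target before framework init, bdev_nvme_set_options arms per-error statistics and unlimited retries on the bperf side, and accel_error_inject_error corrupts 256 digest computations once the controller is attached. Those corruptions surface below as 'data digest error' and 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' completions. Condensed from the trace; the rpc_cmd and bperf_rpc helpers wrap these same calls, with the target answering on the default /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" accel_assign_opc -o crc32c -m error         # target: crc32c -> error module
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$RPC" accel_error_inject_error -o crc32c -t disable     # injection off while attaching
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 digests
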
00:20:37.477 [2024-04-24 16:16:38.577925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.577971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.577995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.590248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.590283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.590303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.605577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.605613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.605632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.619884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.619921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.619940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.632205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.632239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.632258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.647598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.647632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.647651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.660974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.661004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.661022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.676515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.676549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.676569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.688575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.688608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.688628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.703145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.703180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.703199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.717996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.718026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.718042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.733910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.733938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.733960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.748410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.748444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.748469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.477 [2024-04-24 16:16:38.760110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.477 [2024-04-24 16:16:38.760155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.477 [2024-04-24 16:16:38.760172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.781 [2024-04-24 16:16:38.774820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.781 [2024-04-24 16:16:38.774866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.781 [2024-04-24 16:16:38.774884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.781 [2024-04-24 16:16:38.788974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.781 [2024-04-24 16:16:38.789018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.781 [2024-04-24 16:16:38.789036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.781 [2024-04-24 16:16:38.804303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.781 [2024-04-24 16:16:38.804338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.781 [2024-04-24 16:16:38.804358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.781 [2024-04-24 16:16:38.816376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.781 [2024-04-24 16:16:38.816411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.781 [2024-04-24 16:16:38.816430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.781 [2024-04-24 16:16:38.831894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.781 [2024-04-24 16:16:38.831925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.781 [2024-04-24 16:16:38.831942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.781 [2024-04-24 16:16:38.846674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.781 [2024-04-24 16:16:38.846711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.781 [2024-04-24 16:16:38.846731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.781 [2024-04-24 16:16:38.859531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0) 00:20:37.781 [2024-04-24 16:16:38.859566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:37.781 [2024-04-24 16:16:38.859586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.781 [2024-04-24 16:16:38.873956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0)
00:20:37.781 [2024-04-24 16:16:38.873987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.781 [2024-04-24 16:16:38.874005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.781 [2024-04-24 16:16:38.887878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19f49c0)
00:20:37.781 [2024-04-24 16:16:38.887909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.781 [2024-04-24 16:16:38.887926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... ~120 further data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets on tqpair=(0x19f49c0), qid:1, len:1, from 16:16:38.902 through 16:16:40.555, identical except for timestamp, cid, and lba, elided ...]
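Each triplet above is one hit of the injected CRC32C corruption: nvme_tcp_accel_seq_recv_compute_crc32_done() flags a data digest mismatch on the received TCP data, the affected READ is printed, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22); because the controller is attached with unlimited retries (--bdev-retry-count -1, per the setup trace for the next run below), the bdev layer retries and the run still completes. A quick way to tally these hits from a saved copy of this console log (a sketch; build.log is a stand-in file name, not a file produced by this job):

    # count how many commands completed with a transient transport error
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' build.log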
00:20:39.459 
00:20:39.459 Latency(us)
00:20:39.459 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average        min        max
00:20:39.459 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:39.459 nvme0n1                     :       2.01   18396.23      71.86      0.00    0.00    6950.42    3325.35   18544.26
00:20:39.459 ===================================================================================================================
00:20:39.459 Total                       :            18396.23      71.86      0.00    0.00    6950.42    3325.35   18544.26
00:20:39.459 
00:20:39.459 0
00:20:39.459 16:16:40 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:39.459 16:16:40 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:39.459 16:16:40 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:39.459 | .driver_specific
00:20:39.459 | .nvme_error
00:20:39.459 | .status_code
00:20:39.459 | .command_transient_transport_error'
00:20:39.459 16:16:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:39.718 16:16:40 -- host/digest.sh@71 -- # (( 144 > 0 ))
00:20:39.718 16:16:40 -- host/digest.sh@73 -- # killprocess 3471247
00:20:39.718 16:16:40 -- common/autotest_common.sh@936 -- # '[' -z 3471247 ']'
00:20:39.718 16:16:40 -- common/autotest_common.sh@940 -- # kill -0 3471247
00:20:39.718 16:16:40 -- common/autotest_common.sh@941 -- # uname
00:20:39.718 16:16:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:39.718 16:16:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3471247
00:20:39.718 16:16:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:39.718 16:16:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:39.718 16:16:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3471247'
00:20:39.718 killing process with pid 3471247
00:20:39.718 16:16:40 -- common/autotest_common.sh@955 -- # kill 3471247
00:20:39.718 Received shutdown signal, test time was about 2.000000 seconds
00:20:39.718 
00:20:39.718 Latency(us)
00:20:39.718 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average        min        max
00:20:39.718 ===================================================================================================================
00:20:39.718 Total                       :       0.00       0.00       0.00      0.00    0.00       0.00       0.00       0.00
00:20:39.718 16:16:40 -- common/autotest_common.sh@960 -- # wait 3471247
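For reference, the get_transient_errcount helper traced above is just a bdev_get_iostat RPC filtered through jq; here it returned 144, which satisfied the (( 144 > 0 )) check. A standalone equivalent, as a sketch (it assumes it is issued while the bdevperf instance under test is still listening on /var/tmp/bperf.sock and that counters were enabled earlier with bdev_nvme_set_options --nvme-error-stat):

    # read the per-bdev transient-transport-error counter
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'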
00:20:39.976 [2024-04-24 16:16:41.153194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471668 ]
00:20:39.976 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:39.976 Zero copy mechanism will not be used.
00:20:39.976 EAL: No free 2048 kB hugepages reported on node 1
00:20:39.976 [2024-04-24 16:16:41.221674] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:40.233 [2024-04-24 16:16:41.335698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:40.233 16:16:41 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:40.233 16:16:41 -- common/autotest_common.sh@850 -- # return 0
00:20:40.233 16:16:41 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:40.233 16:16:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:40.490 16:16:41 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:40.490 16:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:40.490 16:16:41 -- common/autotest_common.sh@10 -- # set +x
00:20:40.490 16:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:40.490 16:16:41 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:40.490 16:16:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:41.055 nvme0n1
00:20:41.055 16:16:42 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:41.055 16:16:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:41.055 16:16:42 -- common/autotest_common.sh@10 -- # set +x
00:20:41.055 16:16:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:41.055 16:16:42 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:41.055 16:16:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:41.055 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:41.055 Zero copy mechanism will not be used.
00:20:41.055 Running I/O for 2 seconds...
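This case repeats the recipe at 128 KiB I/O and queue depth 16. A condensed sketch of the setup just traced, under two assumptions: the NVMe-oF target's RPC socket is the SPDK default /var/tmp/spdk.sock (rpc_cmd in the trace addresses the target app, where the crc32c corruption is injected, while bperf_rpc addresses bdevperf), and the -i 32 injection flag is passed through exactly as traced:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf (host side)
  TGT="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"      # nvmf target (assumed default socket)

  # Start bdevperf idle (-z, waiting for perform_tests): randread, 128 KiB I/O, QD 16, 2 s run.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # (the real script waits for the RPC socket to appear before issuing RPCs)

  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, retry forever
  $TGT accel_error_inject_error -o crc32c -t disable                     # attach with injection off
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                               # data digest enabled
  $TGT accel_error_inject_error -o crc32c -t corrupt -i 32               # now corrupt crc32c results
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted READ below spans len:32 blocks (32 x 4096 B, matching the 128 KiB I/O size, where the previous 4 KiB case showed len:1) and completes with status (00/22): status code type 0x0 (generic), status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR, with dnr:0, so the retry policy set above applies and the workload itself sees no failures.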
00:20:41.055 [2024-04-24 16:16:42.268041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.055 [2024-04-24 16:16:42.268110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.055 [2024-04-24 16:16:42.268133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.055 [2024-04-24 16:16:42.277138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.055 [2024-04-24 16:16:42.277184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.055 [2024-04-24 16:16:42.277204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.055 [2024-04-24 16:16:42.286187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.055 [2024-04-24 16:16:42.286222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.055 [2024-04-24 16:16:42.286245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.055 [2024-04-24 16:16:42.295657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.055 [2024-04-24 16:16:42.295690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.055 [2024-04-24 16:16:42.295709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.055 [2024-04-24 16:16:42.304605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.055 [2024-04-24 16:16:42.304639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.055 [2024-04-24 16:16:42.304658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.055 [2024-04-24 16:16:42.314515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.055 [2024-04-24 16:16:42.314549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.055 [2024-04-24 16:16:42.314567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.055 [2024-04-24 16:16:42.324548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.055 [2024-04-24 16:16:42.324582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.055 [2024-04-24 16:16:42.324601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.055 [2024-04-24 16:16:42.334609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.055 [2024-04-24 16:16:42.334645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.055 [2024-04-24 16:16:42.334664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.345040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.345091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.345112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.356174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.356210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.356230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.367667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.367702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.367723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.376871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.376908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.376925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.387843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.387881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.387899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.397841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.397886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.397903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.407505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.407538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.407557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.416424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.416457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.416475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.425357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.425388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.425407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.435279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.435312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.435330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.444248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.444288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.444307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.454252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.454286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.454305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.463226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.463260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:41.312 [2024-04-24 16:16:42.463279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.474558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.474592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.474611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.484493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.484527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.484545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.495293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.495326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.495345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.504128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.504161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.504180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.312 [2024-04-24 16:16:42.513111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.312 [2024-04-24 16:16:42.513144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.312 [2024-04-24 16:16:42.513163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.313 [2024-04-24 16:16:42.523071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.313 [2024-04-24 16:16:42.523118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.313 [2024-04-24 16:16:42.523137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.313 [2024-04-24 16:16:42.533083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.313 [2024-04-24 16:16:42.533132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.313 [2024-04-24 16:16:42.533153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.313 [2024-04-24 16:16:42.544028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.313 [2024-04-24 16:16:42.544077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.313 [2024-04-24 16:16:42.544097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.313 [2024-04-24 16:16:42.553126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.313 [2024-04-24 16:16:42.553160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.313 [2024-04-24 16:16:42.553179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.313 [2024-04-24 16:16:42.562184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.313 [2024-04-24 16:16:42.562218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.313 [2024-04-24 16:16:42.562238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.313 [2024-04-24 16:16:42.572036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.313 [2024-04-24 16:16:42.572083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.313 [2024-04-24 16:16:42.572102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.313 [2024-04-24 16:16:42.580880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.313 [2024-04-24 16:16:42.580909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.313 [2024-04-24 16:16:42.580926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.313 [2024-04-24 16:16:42.589855] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.313 [2024-04-24 16:16:42.589884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.313 [2024-04-24 16:16:42.589901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.571 [2024-04-24 16:16:42.598801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.571 [2024-04-24 16:16:42.598833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.571 [2024-04-24 16:16:42.598850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.571 [2024-04-24 16:16:42.608508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.571 [2024-04-24 16:16:42.608543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.571 [2024-04-24 16:16:42.608568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.571 [2024-04-24 16:16:42.617432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.571 [2024-04-24 16:16:42.617466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.571 [2024-04-24 16:16:42.617485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.571 [2024-04-24 16:16:42.625887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.571 [2024-04-24 16:16:42.625931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.571 [2024-04-24 16:16:42.625949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.571 [2024-04-24 16:16:42.634885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.571 [2024-04-24 16:16:42.634914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.571 [2024-04-24 16:16:42.634930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.643754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.643800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.643817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.652672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.652706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.652725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.662558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 
00:20:41.572 [2024-04-24 16:16:42.662592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.662611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.672436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.672468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.672487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.682381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.682414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.682433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.691134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.691176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.691195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.700033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.700080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.700099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.708996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.709026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.709042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.717760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.717804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.717820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.726606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.726640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.726659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.735362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.735392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.735409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.744718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.744765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.744802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.753637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.753670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.753688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.762516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.762548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.762567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.772516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.772549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.772567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.781648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.781681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.781699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.790578] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.790614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.790633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.800293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.800326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.800345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.809136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.809169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.809188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.818870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.818899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.818915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.828677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.828707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.828726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.838534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.838568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.838586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.572 [2024-04-24 16:16:42.847291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.572 [2024-04-24 16:16:42.847323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.572 [2024-04-24 16:16:42.847348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:41.831 [2024-04-24 16:16:42.857217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.831 [2024-04-24 16:16:42.857252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.831 [2024-04-24 16:16:42.857272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.831 [2024-04-24 16:16:42.867046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.831 [2024-04-24 16:16:42.867075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.831 [2024-04-24 16:16:42.867092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.831 [2024-04-24 16:16:42.877267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.831 [2024-04-24 16:16:42.877301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.831 [2024-04-24 16:16:42.877319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.831 [2024-04-24 16:16:42.887300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.831 [2024-04-24 16:16:42.887333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.831 [2024-04-24 16:16:42.887352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.897153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.897185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.897203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.907078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.907125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.907144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.916157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.916188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.916207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.925065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.925111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.925129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.933972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.934001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.934033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.942703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.942735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.942764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.952158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.952192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.952210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.961105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.961131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.961147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.970819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.970848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.970864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.980223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.980257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.980276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:42.991201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:42.991235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:42.991253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.001096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.001130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.001149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.010258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.010289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.010314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.019179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.019212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.019230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.028143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.028175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.028193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.037936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.037964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.037981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.046894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.046927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:41.832 [2024-04-24 16:16:43.046946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.056921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.056951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.056968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.066806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.066834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.066850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.076636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.076668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.076686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.085605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.085637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.085656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.094454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.094492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.094511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.104434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.104466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.104484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.832 [2024-04-24 16:16:43.115402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:41.832 [2024-04-24 16:16:43.115436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.832 [2024-04-24 16:16:43.115455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.090 [2024-04-24 16:16:43.125135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.090 [2024-04-24 16:16:43.125169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.090 [2024-04-24 16:16:43.125188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.090 [2024-04-24 16:16:43.135561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.090 [2024-04-24 16:16:43.135594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.090 [2024-04-24 16:16:43.135613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.090 [2024-04-24 16:16:43.144793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.144822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.144838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.155796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.155824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.155840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.165582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.165613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.165632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.174517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.174549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.174567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.183509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.183541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.183559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.192342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.192373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.192391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.202087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.202114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.202130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.210910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.210939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.210955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.220883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.220912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.220929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.229852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.229880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.229896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.239565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.239597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.239615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.248461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 
00:20:42.091 [2024-04-24 16:16:43.248492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.248510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.258244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.258275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.258299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.267114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.267146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.267164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.277896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.277925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.277942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.287825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.287852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.287869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.296660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.296692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.296710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.306624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.306660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.306679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.316522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.316556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.316575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.326497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.326529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.326548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.337121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.337152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.337172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.345982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.346029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.346046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.355124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.355156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.355176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.365001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.365031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.365048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.091 [2024-04-24 16:16:43.374660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.091 [2024-04-24 16:16:43.374695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.091 [2024-04-24 16:16:43.374714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.383537] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.383571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.383590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.392428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.392461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.392479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.402127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.402171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.402190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.412069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.412101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.412120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.422039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.422068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.422089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.430909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.430937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.430953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.441645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.441678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.441697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:42.350 [2024-04-24 16:16:43.450644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.450676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.450694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.459637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.459668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.459686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.468550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.468582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.468601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.478512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.478544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.478563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.488302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.488334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.488352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.497163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.497194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.497213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.506061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.506113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.506133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.350 [2024-04-24 16:16:43.514899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.350 [2024-04-24 16:16:43.514928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.350 [2024-04-24 16:16:43.514945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.523970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.523999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.524016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.532882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.532911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.532928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.542865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.542894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.542910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.551916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.551944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.551960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.561065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.561113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.561131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.570103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.570132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.570163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.580121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.580169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.580189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.589126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.589159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.589178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.598899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.598942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.598959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.607796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.607825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.607841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.617658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.617690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.617708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.351 [2024-04-24 16:16:43.627566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.351 [2024-04-24 16:16:43.627600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.351 [2024-04-24 16:16:43.627619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.637394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.637428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:42.609 [2024-04-24 16:16:43.637448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.647264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.647297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.647315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.657183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.657216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.657235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.666244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.666276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.666301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.676137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.676180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.676197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.685224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.685255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.685273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.695173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.695206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.695224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.705187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.705219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.705237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.714168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.714200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.714218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.724140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.724184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.724203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.733129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.733161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.733179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.743889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.743918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.743935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.753236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.753274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.753293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.765226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.765259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.765279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.778482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.778516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.778536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.789304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.789338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.789357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.800346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.800378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.800397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.812430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.812464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.812483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.825430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.825466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.825486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.838408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.838442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.838461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.850646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.609 [2024-04-24 16:16:43.850680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.850700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.862468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 
00:20:42.609 [2024-04-24 16:16:43.862502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.609 [2024-04-24 16:16:43.862520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.609 [2024-04-24 16:16:43.873515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.610 [2024-04-24 16:16:43.873550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.610 [2024-04-24 16:16:43.873569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.610 [2024-04-24 16:16:43.885589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.610 [2024-04-24 16:16:43.885624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.610 [2024-04-24 16:16:43.885644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:43.897144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:43.897179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:43.897197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:43.909529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:43.909564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:43.909584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:43.923383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:43.923417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:43.923436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:43.936570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:43.936605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:43.936623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:43.948383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:43.948417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:43.948436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:43.961861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:43.961905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:43.961930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:43.973916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:43.973946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:43.973963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:43.986288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:43.986322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:43.986341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:43.999471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:43.999507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:43.999526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:44.012534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:44.012569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:44.012587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:44.026894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:44.026925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:44.026941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:44.039198] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:44.039232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:44.039252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:44.052636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:44.052671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:44.052690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:44.063961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:44.064006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:44.064024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:44.075123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:44.075160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:44.075179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:44.086830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.867 [2024-04-24 16:16:44.086863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.867 [2024-04-24 16:16:44.086880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.867 [2024-04-24 16:16:44.097226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.868 [2024-04-24 16:16:44.097261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.868 [2024-04-24 16:16:44.097280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.868 [2024-04-24 16:16:44.107088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.868 [2024-04-24 16:16:44.107123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.868 [2024-04-24 16:16:44.107143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:42.868 [2024-04-24 16:16:44.118774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.868 [2024-04-24 16:16:44.118819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.868 [2024-04-24 16:16:44.118835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.868 [2024-04-24 16:16:44.131564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.868 [2024-04-24 16:16:44.131599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.868 [2024-04-24 16:16:44.131618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.868 [2024-04-24 16:16:44.143157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:42.868 [2024-04-24 16:16:44.143192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.868 [2024-04-24 16:16:44.143212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.126 [2024-04-24 16:16:44.155167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:43.126 [2024-04-24 16:16:44.155213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.126 [2024-04-24 16:16:44.155231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.126 [2024-04-24 16:16:44.166237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:43.126 [2024-04-24 16:16:44.166272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.126 [2024-04-24 16:16:44.166298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.126 [2024-04-24 16:16:44.176969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:43.126 [2024-04-24 16:16:44.177014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.126 [2024-04-24 16:16:44.177031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.126 [2024-04-24 16:16:44.188924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0) 00:20:43.126 [2024-04-24 16:16:44.188953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.126 [2024-04-24 16:16:44.188984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:43.126 [2024-04-24 16:16:44.201171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0)
00:20:43.126 [2024-04-24 16:16:44.201205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.126 [2024-04-24 16:16:44.201224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:43.126 [2024-04-24 16:16:44.212701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0)
00:20:43.126 [2024-04-24 16:16:44.212735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.126 [2024-04-24 16:16:44.212762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:43.126 [2024-04-24 16:16:44.223988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0)
00:20:43.126 [2024-04-24 16:16:44.224017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.126 [2024-04-24 16:16:44.224048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:43.126 [2024-04-24 16:16:44.234728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0)
00:20:43.126 [2024-04-24 16:16:44.234784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.126 [2024-04-24 16:16:44.234802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:43.126 [2024-04-24 16:16:44.247048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0)
00:20:43.126 [2024-04-24 16:16:44.247096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.126 [2024-04-24 16:16:44.247115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:43.126 [2024-04-24 16:16:44.257617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb14d0)
00:20:43.126 [2024-04-24 16:16:44.257651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.126 [2024-04-24 16:16:44.257670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:43.126
00:20:43.126 Latency(us)
00:20:43.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:43.126 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:43.126 nvme0n1 : 2.00 3085.93 385.74 0.00 0.00 5180.65 1686.95 14369.37
00:20:43.126 ===================================================================================================================
00:20:43.126 Total : 3085.93 385.74 0.00 0.00 5180.65 1686.95 14369.37
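The summary row above is easy to sanity-check: at an IO size of 131072 bytes (128 KiB) there are exactly 8 I/Os per MiB, so the MiB/s column should simply be the IOPS column divided by 8. A one-line check of the arithmetic (shell sketch; bc is assumed to be available on the build host, it is not something the job itself runs):

  # 3085.93 IOPS x 131072 B per IO, converted to MiB/s; expect the 385.74 printed above
  echo 'scale=2; 3085.93 * 131072 / (1024 * 1024)' | bc

Also note that Fail/s stays at 0.00 even though every READ above completed with TRANSIENT TRANSPORT ERROR (00/22): the test attaches the controller after setting --bdev-retry-count -1 (visible in the setup trace for the write run below), so these completions are retried and counted rather than surfaced as failed I/O.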
00:20:43.126 0
00:20:43.126 16:16:44 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
16:16:44 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
16:16:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
16:16:44 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:43.126 | .driver_specific
00:20:43.126 | .nvme_error
00:20:43.126 | .status_code
00:20:43.126 | .command_transient_transport_error'
00:20:43.384 16:16:44 -- host/digest.sh@71 -- # (( 199 > 0 ))
16:16:44 -- host/digest.sh@73 -- # killprocess 3471668
16:16:44 -- common/autotest_common.sh@936 -- # '[' -z 3471668 ']'
16:16:44 -- common/autotest_common.sh@940 -- # kill -0 3471668
16:16:44 -- common/autotest_common.sh@941 -- # uname
16:16:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
16:16:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3471668
16:16:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1
16:16:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
16:16:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3471668'
00:20:43.384 killing process with pid 3471668
16:16:44 -- common/autotest_common.sh@955 -- # kill 3471668
00:20:43.384 Received shutdown signal, test time was about 2.000000 seconds
00:20:43.384
00:20:43.384 Latency(us)
00:20:43.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:43.384 ===================================================================================================================
00:20:43.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:43.384 16:16:44 -- common/autotest_common.sh@960 -- # wait 3471668
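That block is the entire pass/fail logic of the digest test: get_transient_errcount queries the still-running bdevperf over its RPC socket for per-bdev I/O statistics and pulls a single counter out of the JSON, and the test passes if the counter is non-zero (here it read 199). The same query can be reproduced by hand against any bdevperf started with -r /var/tmp/bperf.sock; a minimal sketch using only the commands already visible in the trace (the errcount variable name is just for readability):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # transient transport errors (NVMe status 00/22) counted for nvme0n1
  errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "injected digest errors were counted: $errcount"

The nvme_error block is only populated because bdev_nvme_set_options is called with --nvme-error-stat before the controller is attached, as the next setup trace shows.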
00:20:43.641 16:16:44 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
16:16:44 -- host/digest.sh@54 -- # local rw bs qd
16:16:44 -- host/digest.sh@56 -- # rw=randwrite
16:16:44 -- host/digest.sh@56 -- # bs=4096
16:16:44 -- host/digest.sh@56 -- # qd=128
00:20:43.642 16:16:44 -- host/digest.sh@58 -- # bperfpid=3472180
16:16:44 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
16:16:44 -- host/digest.sh@60 -- # waitforlisten 3472180 /var/tmp/bperf.sock
16:16:44 -- common/autotest_common.sh@817 -- # '[' -z 3472180 ']'
16:16:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
16:16:44 -- common/autotest_common.sh@822 -- # local max_retries=100
16:16:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:43.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:16:44 -- common/autotest_common.sh@826 -- # xtrace_disable
16:16:44 -- common/autotest_common.sh@10 -- # set +x
00:20:43.642 [2024-04-24 16:16:44.844712] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:20:43.642 [2024-04-24 16:16:44.844832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472180 ]
00:20:43.642 EAL: No free 2048 kB hugepages reported on node 1
00:20:43.642 [2024-04-24 16:16:44.903176] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:43.900 [2024-04-24 16:16:45.008876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:43.900 16:16:45 -- common/autotest_common.sh@846 -- # (( i == 0 ))
16:16:45 -- common/autotest_common.sh@850 -- # return 0
16:16:45 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
16:16:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:44.158 16:16:45 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable
16:16:45 -- common/autotest_common.sh@10 -- # set +x
00:20:44.158 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
16:16:45 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
16:16:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:44.723 nvme0n1
00:20:44.723 16:16:45 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable
16:16:45 -- common/autotest_common.sh@10 -- # set +x
00:20:44.723 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
16:16:45 -- host/digest.sh@69 -- # bperf_py perform_tests
16:16:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:44.723 Running I/O for 2 seconds...
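With -z, bdevperf comes up idle and waits on the RPC socket, so the whole scenario above is driven by a handful of RPC calls before perform_tests starts the 2-second run. Condensed into a plain shell sketch (every path, address, and argument is taken from the trace above; the spdk/rpc variables and the explicit wait for the socket are readability additions, the job itself uses waitforlisten):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done                 # wait for the RPC socket
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors, retry forever
  $rpc accel_error_inject_error -o crc32c -t disable                   # start from a clean injection state
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # --ddgst enables TCP data digests
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # arm crc32c corruption (-i 256 as traced)
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Once the corrupted crc32c results stop matching the data digests on the wire, each affected WRITE completes with the same TRANSIENT TRANSPORT ERROR (00/22) status seen in the read test, which is exactly what the records that follow show.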
00:20:44.723 [2024-04-24 16:16:45.899225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846830) with pdu=0x2000190ed920
00:20:44.723 [2024-04-24 16:16:45.900394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:44.723 [2024-04-24 16:16:45.900434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0
[... same three-line pattern repeated for the remaining injected data-digest errors on tqpair=(0x846830), 16:16:45.912 through 16:16:47.764, with varying pdu, cid, lba, and sqhd values; every affected WRITE completes with TRANSIENT TRANSPORT ERROR (00/22) on qid:1 ...]
00:20:46.545 [2024-04-24 16:16:47.778173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846830) with pdu=0x2000190f9f68
00:20:46.545 [2024-04-24 16:16:47.778440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:46.545 [2024-04-24 16:16:47.778470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[2024-04-24 16:16:47.792511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846830) with pdu=0x2000190f9f68 00:20:46.545 [2024-04-24 16:16:47.792826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.545 [2024-04-24 16:16:47.792852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:46.545 [2024-04-24 16:16:47.807004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846830) with pdu=0x2000190f9f68 00:20:46.545 [2024-04-24 16:16:47.807311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.545 [2024-04-24 16:16:47.807340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:46.545 [2024-04-24 16:16:47.821499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846830) with pdu=0x2000190f9f68 00:20:46.545 [2024-04-24 16:16:47.821815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.545 [2024-04-24 16:16:47.821841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:46.804 [2024-04-24 16:16:47.835947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846830) with pdu=0x2000190f9f68 00:20:46.804 [2024-04-24 16:16:47.836255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.804 [2024-04-24 16:16:47.836287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:46.804 [2024-04-24 16:16:47.850334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846830) with pdu=0x2000190f9f68 00:20:46.804 [2024-04-24 16:16:47.850627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.804 [2024-04-24 16:16:47.850658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:46.804 [2024-04-24 16:16:47.864976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846830) with pdu=0x2000190f9f68 00:20:46.804 [2024-04-24 16:16:47.865286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.804 [2024-04-24 16:16:47.865315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:46.804 [2024-04-24 16:16:47.879301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846830) with pdu=0x2000190f9f68 00:20:46.804 [2024-04-24 16:16:47.879594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:46.804 [2024-04-24 16:16:47.879624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 
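Each of those failures is the same three-line signature: tcp.c:2047:data_crc32_calc_done flags the bad CRC32C data digest on the queue pair's PDU, nvme_qpair.c prints the WRITE that was in flight, and the completion carries COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. generic status code 0x22, which the driver treats as retryable rather than fatal. When triaging a saved console log offline, the signature is easy to tally with grep; a small illustrative snippet (the log file name is an assumption):

  # Count injected data digest failures in a saved console log.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log

  # Group the transient-error completions by queue and command identifier
  # to see which qid/cid pairs absorbed the injections.
  grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]* cid:[0-9]*' console.log |
    sort | uniq -c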
00:20:46.804
00:20:46.804 Latency(us)
00:20:46.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:46.804 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:20:46.804 nvme0n1 : 2.01 19586.54 76.51 0.00 0.00 6519.57 2378.71 14660.65
00:20:46.804 ===================================================================================================================
00:20:46.804 Total : 19586.54 76.51 0.00 0.00 6519.57 2378.71 14660.65
00:20:46.804 0
00:20:46.804 16:16:47 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:46.804 16:16:47 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:46.804 16:16:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
16:16:47 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:46.804 | .driver_specific
00:20:46.804 | .nvme_error
00:20:46.804 | .status_code
00:20:46.804 | .command_transient_transport_error'
00:20:47.062 16:16:48 -- host/digest.sh@71 -- # (( 154 > 0 ))
00:20:47.062 16:16:48 -- host/digest.sh@73 -- # killprocess 3472180
00:20:47.062 16:16:48 -- common/autotest_common.sh@936 -- # '[' -z 3472180 ']'
00:20:47.062 16:16:48 -- common/autotest_common.sh@940 -- # kill -0 3472180
00:20:47.062 16:16:48 -- common/autotest_common.sh@941 -- # uname
00:20:47.062 16:16:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:47.062 16:16:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3472180
00:20:47.062 16:16:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:47.062 16:16:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:47.062 16:16:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3472180'
killing process with pid 3472180
16:16:48 -- common/autotest_common.sh@955 -- # kill 3472180
Received shutdown signal, test time was about 2.000000 seconds
00:20:47.062
00:20:47.062 Latency(us)
00:20:47.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:47.062 ===================================================================================================================
00:20:47.062 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:47.062 16:16:48 -- common/autotest_common.sh@960 -- # wait 3472180
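The pass check above is the crux of the digest test: get_transient_errcount reads the initiator-side NVMe error counters (enabled earlier with bdev_nvme_set_options --nvme-error-stat) through bdev_get_iostat and extracts command_transient_transport_error with jq. Here 154 injected digest errors were tallied, so (( 154 > 0 )) holds and bdevperf is torn down. The latency table is also self-consistent: 19586.54 IOPS x 4096 B per I/O ≈ 76.51 MiB/s. A minimal standalone sketch of the same query, assuming a bdevperf instance is listening on /var/tmp/bperf.sock and jq is available (the script wrapper itself is illustrative, not part of the harness):

  #!/usr/bin/env bash
  # Illustrative sketch: count transient transport errors recorded for a bdev,
  # mirroring get_transient_errcount in host/digest.sh.
  set -euo pipefail

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # adjust to your checkout
  SOCK=/var/tmp/bperf.sock
  BDEV=${1:-nvme0n1}

  # With --nvme-error-stat set on the controller, per-status counters show up
  # under driver_specific.nvme_error in the bdev_get_iostat output.
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_get_iostat -b "$BDEV" |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  echo "$BDEV: $errcount transient transport errors"
  (( errcount > 0 ))  # exit non-zero if the injected digest errors were never observed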
16:16:48 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
16:16:48 -- host/digest.sh@54 -- # local rw bs qd
16:16:48 -- host/digest.sh@56 -- # rw=randwrite
16:16:48 -- host/digest.sh@56 -- # bs=131072
16:16:48 -- host/digest.sh@56 -- # qd=16
16:16:48 -- host/digest.sh@58 -- # bperfpid=3472594
16:16:48 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
16:16:48 -- host/digest.sh@60 -- # waitforlisten 3472594 /var/tmp/bperf.sock
00:20:47.320 16:16:48 -- common/autotest_common.sh@817 -- # '[' -z 3472594 ']'
00:20:47.320 16:16:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:47.320 16:16:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:47.320 16:16:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:47.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:47.320 16:16:48 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:47.320 16:16:48 -- common/autotest_common.sh@10 -- # set +x
00:20:47.320 [2024-04-24 16:16:48.479305] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:20:47.320 [2024-04-24 16:16:48.479385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472594 ]
00:20:47.320 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:47.320 Zero copy mechanism will not be used.
00:20:47.320 EAL: No free 2048 kB hugepages reported on node 1
00:20:47.320 [2024-04-24 16:16:48.539073] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:47.578 [2024-04-24 16:16:48.644639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:47.578 16:16:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:47.578 16:16:48 -- common/autotest_common.sh@850 -- # return 0
00:20:47.578 16:16:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:47.578 16:16:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:47.836 16:16:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:47.836 16:16:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:47.836 16:16:48 -- common/autotest_common.sh@10 -- # set +x
00:20:47.836 16:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:47.836 16:16:49 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:47.836 16:16:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:48.094 nvme0n1
00:20:48.094 16:16:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:48.094 16:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:48.094 16:16:49 -- common/autotest_common.sh@10 -- # set +x
00:20:48.094 16:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:48.094 16:16:49 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:48.094 16:16:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:48.355 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:48.355 Zero copy mechanism will not be used.
00:20:48.355 Running I/O for 2 seconds...
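That trace is the full wiring of the second error pass (run_bperf_err randwrite 131072 16): bdevperf is launched idle (-z) on its own RPC socket; per-status NVMe error counting is enabled with an unlimited bdev retry count so injected errors are tallied rather than fatal; any stale crc32c corruption is cleared; the controller is attached with the TCP data digest enabled (--ddgst) so every data PDU carries a CRC32C; corruption is then armed (-t corrupt -i 32) and perform_tests starts the 2-second run. A condensed sketch of that sequence, with two stated assumptions: the target-side RPC socket behind rpc_cmd (taken here as the default /var/tmp/spdk.sock) never appears in the trace, and the readiness loop merely stands in for autotest's waitforlisten:

  #!/usr/bin/env bash
  # Sketch of one data-digest error-injection pass, reconstructed from the
  # host/digest.sh trace above; not the harness itself.
  set -euo pipefail

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # adjust to your checkout
  BPERF_SOCK=/var/tmp/bperf.sock
  TARGET_SOCK=/var/tmp/spdk.sock  # assumption: default nvmf target RPC socket

  bperf_rpc()  { "$SPDK_DIR"/scripts/rpc.py -s "$BPERF_SOCK" "$@"; }
  target_rpc() { "$SPDK_DIR"/scripts/rpc.py -s "$TARGET_SOCK" "$@"; }

  # Launch bdevperf idle (-z waits for RPC-driven tests): randwrite, 128 KiB I/O, QD 16, 2 s.
  "$SPDK_DIR"/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
  until [ -S "$BPERF_SOCK" ]; do sleep 0.2; done  # stand-in for waitforlisten

  # Tally every NVMe status and retry indefinitely instead of failing the bdev.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any armed crc32c corruption, then connect with the data digest on.
  target_rpc accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm crc32c corruption (-t corrupt -i 32, exactly as traced) and run the
  # workload; each bad digest surfaces as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

  # (The harness then reads the error count via bdev_get_iostat, as shown
  # earlier, and kills the bdevperf process.)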
00:20:48.355 [2024-04-24 16:16:49.444352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90
00:20:48.355 [2024-04-24 16:16:49.444766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.355 [2024-04-24 16:16:49.444821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... roughly 95 near-identical triplets omitted, 16:16:49.460 through 16:16:50.938: every ~15 ms tcp.c:2047:data_crc32_calc_done flags "Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90", followed by the in-flight WRITE (sqid:1 cid:15, len:32, lba varying) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, sqhd cycling 0041/0061/0001/0021 ...]
00:20:49.919 [2024-04-24 16:16:50.953642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90
00:20:49.919 [2024-04-24 16:16:50.954007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.919 [2024-04-24 16:16:50.954038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:50.969500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:50.969960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:50.969989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:50.985196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:50.985443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:50.985474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.001822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.002176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.002204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.018925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.019284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.019313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.033168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.033493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.033520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.047823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.048202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.048244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.064420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.064813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.064857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.079964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.080333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.080361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.095191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.095557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.095585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.111539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.111934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.111977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.125592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.126027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.126061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.142237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.142641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.142667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.158932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.159282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.159309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.174168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.174555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 
[2024-04-24 16:16:51.174595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.920 [2024-04-24 16:16:51.190453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:49.920 [2024-04-24 16:16:51.190889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.920 [2024-04-24 16:16:51.190917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.205119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.205488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.205517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.221508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.221945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.221989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.237242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.237614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.237652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.253439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.253947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.253978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.268355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.268847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.268893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.283245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.283736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.283771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.298287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.298920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.298949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.313515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.313983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.314012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.327789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.328305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.328334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.342377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.342879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.342922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.357381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.357863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.357892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.371588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.372241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.372269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.386811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.387278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.387306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.401541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.402178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.402220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.416881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.417416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.417443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:50.181 [2024-04-24 16:16:51.430767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x846be0) with pdu=0x2000190fef90 00:20:50.181 [2024-04-24 16:16:51.431147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.181 [2024-04-24 16:16:51.431191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.181 00:20:50.181 Latency(us) 00:20:50.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.181 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:50.181 nvme0n1 : 2.01 1979.83 247.48 0.00 0.00 8061.02 5971.06 18058.81 00:20:50.181 =================================================================================================================== 00:20:50.181 Total : 1979.83 247.48 0.00 0.00 8061.02 5971.06 18058.81 00:20:50.181 0 00:20:50.181 16:16:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:50.181 16:16:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:50.181 16:16:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:50.181 | .driver_specific 00:20:50.181 | .nvme_error 00:20:50.181 | .status_code 00:20:50.181 | .command_transient_transport_error' 00:20:50.181 16:16:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:50.439 16:16:51 -- host/digest.sh@71 -- # (( 128 > 0 )) 00:20:50.439 16:16:51 -- host/digest.sh@73 -- # killprocess 3472594 00:20:50.439 16:16:51 -- common/autotest_common.sh@936 -- # '[' -z 3472594 ']' 00:20:50.439 16:16:51 -- common/autotest_common.sh@940 -- # kill -0 3472594 00:20:50.439 16:16:51 -- common/autotest_common.sh@941 -- # uname 00:20:50.439 16:16:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:50.439 16:16:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3472594 00:20:50.699 16:16:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:50.699 16:16:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:50.699 16:16:51 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3472594' 00:20:50.699 killing process with pid 3472594 00:20:50.699 16:16:51 -- common/autotest_common.sh@955 -- # kill 3472594 00:20:50.699 Received shutdown signal, test time was about 2.000000 seconds 00:20:50.699 00:20:50.699 Latency(us) 00:20:50.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.699 =================================================================================================================== 00:20:50.699 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.699 16:16:51 -- common/autotest_common.sh@960 -- # wait 3472594 00:20:50.960 16:16:52 -- host/digest.sh@116 -- # killprocess 3471223 00:20:50.960 16:16:52 -- common/autotest_common.sh@936 -- # '[' -z 3471223 ']' 00:20:50.960 16:16:52 -- common/autotest_common.sh@940 -- # kill -0 3471223 00:20:50.960 16:16:52 -- common/autotest_common.sh@941 -- # uname 00:20:50.960 16:16:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:50.960 16:16:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3471223 00:20:50.960 16:16:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:50.960 16:16:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:50.960 16:16:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3471223' 00:20:50.960 killing process with pid 3471223 00:20:50.960 16:16:52 -- common/autotest_common.sh@955 -- # kill 3471223 00:20:50.960 16:16:52 -- common/autotest_common.sh@960 -- # wait 3471223 00:20:51.219 00:20:51.219 real 0m15.319s 00:20:51.219 user 0m30.642s 00:20:51.219 sys 0m3.849s 00:20:51.219 16:16:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:51.219 16:16:52 -- common/autotest_common.sh@10 -- # set +x 00:20:51.219 ************************************ 00:20:51.219 END TEST nvmf_digest_error 00:20:51.219 ************************************ 00:20:51.219 16:16:52 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:51.219 16:16:52 -- host/digest.sh@150 -- # nvmftestfini 00:20:51.219 16:16:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:51.219 16:16:52 -- nvmf/common.sh@117 -- # sync 00:20:51.220 16:16:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.220 16:16:52 -- nvmf/common.sh@120 -- # set +e 00:20:51.220 16:16:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.220 16:16:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.220 rmmod nvme_tcp 00:20:51.220 rmmod nvme_fabrics 00:20:51.220 rmmod nvme_keyring 00:20:51.220 16:16:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.220 16:16:52 -- nvmf/common.sh@124 -- # set -e 00:20:51.220 16:16:52 -- nvmf/common.sh@125 -- # return 0 00:20:51.220 16:16:52 -- nvmf/common.sh@478 -- # '[' -n 3471223 ']' 00:20:51.220 16:16:52 -- nvmf/common.sh@479 -- # killprocess 3471223 00:20:51.220 16:16:52 -- common/autotest_common.sh@936 -- # '[' -z 3471223 ']' 00:20:51.220 16:16:52 -- common/autotest_common.sh@940 -- # kill -0 3471223 00:20:51.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3471223) - No such process 00:20:51.220 16:16:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3471223 is not found' 00:20:51.220 Process with pid 3471223 is not found 00:20:51.220 16:16:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:51.220 16:16:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:51.220 16:16:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:51.220 
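The pass/fail decision traced above (host/digest.sh@71) boils down to a single RPC against the bperf socket: read the bdev's NVMe error counters and require a nonzero transient-transport-error count — 128 in this run. Condensed into one pipeline, with the rpc.py path and socket from the trace and the jq filter folded into a single path expression:

errs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
           bdev_get_iostat -b nvme0n1 |
       jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 ))   # host/digest.sh@71: the test only passes if digest-induced errors were actually observed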
16:16:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.220 16:16:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.220 16:16:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.220 16:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.220 16:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.126 16:16:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.384 00:20:53.384 real 0m36.751s 00:20:53.384 user 1m4.377s 00:20:53.384 sys 0m9.420s 00:20:53.384 16:16:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:53.384 16:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.384 ************************************ 00:20:53.384 END TEST nvmf_digest 00:20:53.384 ************************************ 00:20:53.384 16:16:54 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:20:53.384 16:16:54 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:20:53.384 16:16:54 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:20:53.384 16:16:54 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:20:53.384 16:16:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:53.384 16:16:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:53.384 16:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.384 ************************************ 00:20:53.384 START TEST nvmf_bdevperf 00:20:53.384 ************************************ 00:20:53.384 16:16:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:20:53.384 * Looking for test storage... 00:20:53.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:53.384 16:16:54 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.384 16:16:54 -- nvmf/common.sh@7 -- # uname -s 00:20:53.384 16:16:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.384 16:16:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.384 16:16:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.384 16:16:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.384 16:16:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.384 16:16:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.384 16:16:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.384 16:16:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.384 16:16:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.384 16:16:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.385 16:16:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:53.385 16:16:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:53.385 16:16:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.385 16:16:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.385 16:16:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.385 16:16:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.385 16:16:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.385 16:16:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.385 16:16:54 -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.385 16:16:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.385 16:16:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.385 16:16:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.385 16:16:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.385 16:16:54 -- paths/export.sh@5 -- # export PATH 00:20:53.385 16:16:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.385 16:16:54 -- nvmf/common.sh@47 -- # : 0 00:20:53.385 16:16:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.385 16:16:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.385 16:16:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.385 16:16:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.385 16:16:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.385 16:16:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.385 16:16:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.385 16:16:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.385 16:16:54 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:53.385 16:16:54 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:53.385 16:16:54 -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:53.385 16:16:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:53.385 16:16:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.385 16:16:54 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:20:53.385 16:16:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:53.385 16:16:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:53.385 16:16:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.385 16:16:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.385 16:16:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.385 16:16:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:53.385 16:16:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:53.385 16:16:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.385 16:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:55.292 16:16:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:55.292 16:16:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.292 16:16:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.292 16:16:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.292 16:16:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.292 16:16:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.292 16:16:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.292 16:16:56 -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.292 16:16:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:55.292 16:16:56 -- nvmf/common.sh@296 -- # e810=() 00:20:55.292 16:16:56 -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.292 16:16:56 -- nvmf/common.sh@297 -- # x722=() 00:20:55.292 16:16:56 -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.292 16:16:56 -- nvmf/common.sh@298 -- # mlx=() 00:20:55.292 16:16:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.292 16:16:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.292 16:16:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.292 16:16:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.292 16:16:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.292 16:16:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.292 16:16:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:55.292 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:55.292 16:16:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.292 16:16:56 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.292 16:16:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:55.292 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:55.292 16:16:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.292 16:16:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.292 16:16:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.292 16:16:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.292 16:16:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:55.292 16:16:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.292 16:16:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:55.292 Found net devices under 0000:09:00.0: cvl_0_0 00:20:55.292 16:16:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.292 16:16:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.292 16:16:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.292 16:16:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:55.292 16:16:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.292 16:16:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:55.292 Found net devices under 0000:09:00.1: cvl_0_1 00:20:55.293 16:16:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.293 16:16:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:55.293 16:16:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:55.293 16:16:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:55.293 16:16:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:55.293 16:16:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:55.293 16:16:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.293 16:16:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.293 16:16:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.293 16:16:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:55.293 16:16:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.293 16:16:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.293 16:16:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:55.293 16:16:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.293 16:16:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.293 16:16:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:55.293 16:16:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:55.293 16:16:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.293 16:16:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.551 16:16:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.551 16:16:56 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.551 16:16:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:55.551 16:16:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.551 16:16:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.551 16:16:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.551 16:16:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:55.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:20:55.551 00:20:55.551 --- 10.0.0.2 ping statistics --- 00:20:55.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.551 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:20:55.551 16:16:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:55.551 00:20:55.551 --- 10.0.0.1 ping statistics --- 00:20:55.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.551 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:55.551 16:16:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.551 16:16:56 -- nvmf/common.sh@411 -- # return 0 00:20:55.551 16:16:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:55.551 16:16:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.551 16:16:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:55.551 16:16:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:55.551 16:16:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.551 16:16:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:55.551 16:16:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:55.551 16:16:56 -- host/bdevperf.sh@25 -- # tgt_init 00:20:55.551 16:16:56 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:55.551 16:16:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:55.551 16:16:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:55.551 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:20:55.551 16:16:56 -- nvmf/common.sh@470 -- # nvmfpid=3474947 00:20:55.552 16:16:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:55.552 16:16:56 -- nvmf/common.sh@471 -- # waitforlisten 3474947 00:20:55.552 16:16:56 -- common/autotest_common.sh@817 -- # '[' -z 3474947 ']' 00:20:55.552 16:16:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.552 16:16:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:55.552 16:16:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.552 16:16:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:55.552 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:20:55.552 [2024-04-24 16:16:56.729597] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
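The nvmf_tcp_init sequence above builds the two-port loopback topology these phy runs use: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, and reachability is proven in both directions (0.143 ms and 0.117 ms pings above) before nvmf_tgt is launched inside that namespace. The essential commands, lifted from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The namespace is what makes a single dual-port NIC behave like two hosts: traffic between 10.0.0.1 and 10.0.0.2 leaves one physical port and re-enters the other instead of being short-circuited by the local routing table.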
00:20:55.552 [2024-04-24 16:16:56.729689] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.552 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.552 [2024-04-24 16:16:56.798657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:55.809 [2024-04-24 16:16:56.918428] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.809 [2024-04-24 16:16:56.918487] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.809 [2024-04-24 16:16:56.918501] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.809 [2024-04-24 16:16:56.918513] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.809 [2024-04-24 16:16:56.918540] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.809 [2024-04-24 16:16:56.918638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.809 [2024-04-24 16:16:56.918698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.809 [2024-04-24 16:16:56.918701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.809 16:16:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:55.809 16:16:57 -- common/autotest_common.sh@850 -- # return 0 00:20:55.809 16:16:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:55.809 16:16:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:55.810 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:20:55.810 16:16:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.810 16:16:57 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:55.810 16:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.810 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:20:55.810 [2024-04-24 16:16:57.057237] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.810 16:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.810 16:16:57 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:55.810 16:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.810 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 Malloc0 00:20:56.068 16:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.068 16:16:57 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:56.068 16:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.068 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 16:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.068 16:16:57 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:56.068 16:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.068 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 16:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.068 16:16:57 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.068 16:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.068 
16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 [2024-04-24 16:16:57.122626] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.068 16:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.068 16:16:57 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:56.068 16:16:57 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:56.068 16:16:57 -- nvmf/common.sh@521 -- # config=() 00:20:56.068 16:16:57 -- nvmf/common.sh@521 -- # local subsystem config 00:20:56.068 16:16:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.068 16:16:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.068 { 00:20:56.068 "params": { 00:20:56.068 "name": "Nvme$subsystem", 00:20:56.068 "trtype": "$TEST_TRANSPORT", 00:20:56.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.068 "adrfam": "ipv4", 00:20:56.068 "trsvcid": "$NVMF_PORT", 00:20:56.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.068 "hdgst": ${hdgst:-false}, 00:20:56.068 "ddgst": ${ddgst:-false} 00:20:56.068 }, 00:20:56.068 "method": "bdev_nvme_attach_controller" 00:20:56.068 } 00:20:56.068 EOF 00:20:56.068 )") 00:20:56.068 16:16:57 -- nvmf/common.sh@543 -- # cat 00:20:56.068 16:16:57 -- nvmf/common.sh@545 -- # jq . 00:20:56.068 16:16:57 -- nvmf/common.sh@546 -- # IFS=, 00:20:56.068 16:16:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:56.068 "params": { 00:20:56.068 "name": "Nvme1", 00:20:56.068 "trtype": "tcp", 00:20:56.068 "traddr": "10.0.0.2", 00:20:56.068 "adrfam": "ipv4", 00:20:56.068 "trsvcid": "4420", 00:20:56.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.068 "hdgst": false, 00:20:56.068 "ddgst": false 00:20:56.068 }, 00:20:56.068 "method": "bdev_nvme_attach_controller" 00:20:56.068 }' 00:20:56.068 [2024-04-24 16:16:57.166359] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:20:56.069 [2024-04-24 16:16:57.166432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475090 ] 00:20:56.069 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.069 [2024-04-24 16:16:57.225816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.069 [2024-04-24 16:16:57.333416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.327 Running I/O for 1 seconds... 
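Target provisioning in the trace above is four rpc_cmd calls plus the listener; condensed below, where rpc.py stands for scripts/rpc.py against the target's default socket, exactly as rpc_cmd wraps it:

rpc.py nvmf_create_transport -t tcp -o -u 8192     # flags as traced above
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, bdevperf does no discovery of its own: gen_nvmf_target_json renders the attach parameters printed above and hands them over an anonymous fd (--json /dev/fd/62). Wrapped in SPDK's JSON-config envelope, the effective invocation is roughly the following — the envelope is reconstructed, only the method/params fragment appears verbatim in this log:

./build/examples/bdevperf -q 128 -o 4096 -w verify -t 1 --json /dev/fd/62 62<<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF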
00:20:57.701 00:20:57.701 Latency(us) 00:20:57.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.701 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:57.701 Verification LBA range: start 0x0 length 0x4000 00:20:57.701 Nvme1n1 : 1.01 8751.10 34.18 0.00 0.00 14569.69 3131.16 16311.18 00:20:57.701 =================================================================================================================== 00:20:57.701 Total : 8751.10 34.18 0.00 0.00 14569.69 3131.16 16311.18 00:20:57.701 16:16:58 -- host/bdevperf.sh@30 -- # bdevperfpid=3475238 00:20:57.701 16:16:58 -- host/bdevperf.sh@32 -- # sleep 3 00:20:57.701 16:16:58 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:20:57.701 16:16:58 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:20:57.701 16:16:58 -- nvmf/common.sh@521 -- # config=() 00:20:57.701 16:16:58 -- nvmf/common.sh@521 -- # local subsystem config 00:20:57.701 16:16:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:57.701 16:16:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:57.701 { 00:20:57.701 "params": { 00:20:57.701 "name": "Nvme$subsystem", 00:20:57.701 "trtype": "$TEST_TRANSPORT", 00:20:57.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.701 "adrfam": "ipv4", 00:20:57.701 "trsvcid": "$NVMF_PORT", 00:20:57.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.701 "hdgst": ${hdgst:-false}, 00:20:57.701 "ddgst": ${ddgst:-false} 00:20:57.701 }, 00:20:57.701 "method": "bdev_nvme_attach_controller" 00:20:57.701 } 00:20:57.701 EOF 00:20:57.701 )") 00:20:57.701 16:16:58 -- nvmf/common.sh@543 -- # cat 00:20:57.701 16:16:58 -- nvmf/common.sh@545 -- # jq . 00:20:57.701 16:16:58 -- nvmf/common.sh@546 -- # IFS=, 00:20:57.701 16:16:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:57.701 "params": { 00:20:57.701 "name": "Nvme1", 00:20:57.701 "trtype": "tcp", 00:20:57.701 "traddr": "10.0.0.2", 00:20:57.701 "adrfam": "ipv4", 00:20:57.701 "trsvcid": "4420", 00:20:57.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.701 "hdgst": false, 00:20:57.701 "ddgst": false 00:20:57.701 }, 00:20:57.701 "method": "bdev_nvme_attach_controller" 00:20:57.701 }' 00:20:57.701 [2024-04-24 16:16:58.866610] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:20:57.701 [2024-04-24 16:16:58.866687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475238 ] 00:20:57.701 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.702 [2024-04-24 16:16:58.928090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.959 [2024-04-24 16:16:59.032005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.217 Running I/O for 15 seconds... 
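The second pass is the failover half of the test: bdevperf runs for 15 seconds with -f (which, as used here, appears to keep it alive when I/O starts failing — an inference from this flow, not verified against bdevperf's help text), and a few seconds in the script SIGKILLs the target out from under it; every WRITE still queued on the connection then completes with the ABORTED - SQ DELETION (00/08) statuses that flood below. Reduced to its timing, with pids and script line references from this run ($rootdir stands for the spdk checkout):

"$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!                    # 3475238 in this run
sleep 3                           # host/bdevperf.sh@32: let I/O reach steady state
kill -9 "$nvmfpid"                # host/bdevperf.sh@33: hard-kill the target, pid 3474947
sleep 3                           # host/bdevperf.sh@35: give the initiator time to observe the failure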
00:21:00.750 16:17:01 -- host/bdevperf.sh@33 -- # kill -9 3474947
00:21:00.750 16:17:01 -- host/bdevperf.sh@35 -- # sleep 3
00:21:00.750 [2024-04-24 16:17:01.838572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:00.750 [2024-04-24 16:17:01.838627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:00.750 [2024-04-24 16:17:01.838664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:00.750 [2024-04-24 16:17:01.838684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:00.750 [... 125 further command/completion pairs elided: every in-flight command on sqid:1 (80 READs, lba:36040-36672, and 48 WRITEs, lba:36680-37056, 128 in total, matching the -q 128 queue depth) completes with the same ABORTED - SQ DELETION (00/08) status ...]
00:21:00.753 [2024-04-24 16:17:01.843196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75a0 is same with the state(5) to be set
00:21:00.753 [2024-04-24 16:17:01.843215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:00.753 [2024-04-24 16:17:01.843228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:00.753 [2024-04-24 16:17:01.843242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36672 len:8 PRP1 0x0 PRP2 0x0
00:21:00.753 [2024-04-24 16:17:01.843258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:00.753 [2024-04-24 16:17:01.843329] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19e75a0 was disconnected and freed. reset controller.
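The flood above is the expected signature of the kill -9 at host/bdevperf.sh@33: the target vanishes under an active connection, so every queued command on the qpair is completed with ABORTED - SQ DELETION before the qpair is freed and a controller reset is scheduled. Because the elided span is the same two-line pattern throughout, the abort count can be sanity-checked mechanically against the queue depth; a sketch against a saved copy of this console output (build.log is a hypothetical filename):

# Tally aborted in-flight commands per opcode; with -q 128 the counts
# should sum to the queue depth (80 READ + 48 WRITE = 128 here).
grep -oE 'print_command: \*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c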
00:21:00.753 [2024-04-24 16:17:01.847173] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:00.753 [2024-04-24 16:17:01.847248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:00.753 [2024-04-24 16:17:01.847954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.753 [2024-04-24 16:17:01.848534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.753 [2024-04-24 16:17:01.848609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:00.753 [2024-04-24 16:17:01.848631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:00.753 [2024-04-24 16:17:01.848895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:00.753 [2024-04-24 16:17:01.849150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:00.753 [2024-04-24 16:17:01.849174] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:00.753 [2024-04-24 16:17:01.849193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:00.753 [2024-04-24 16:17:01.852719] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:00.753 [2024-04-24 16:17:01.861328] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:00.753 [2024-04-24 16:17:01.861772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.753 [2024-04-24 16:17:01.861986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.753 [2024-04-24 16:17:01.862012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:00.753 [2024-04-24 16:17:01.862029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:00.753 [2024-04-24 16:17:01.862287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:00.753 [2024-04-24 16:17:01.862535] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:00.753 [2024-04-24 16:17:01.862560] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:00.753 [2024-04-24 16:17:01.862576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:00.753 [2024-04-24 16:17:01.866133] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
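errno 111 from posix_sock_create is ECONNREFUSED on Linux: after the kill -9 above, nothing is listening on 10.0.0.2:4420, so every reconnect attempt the host driver makes is refused for as long as the target stays down. A quick host-side check using bash's built-in /dev/tcp (illustrative; address and port taken from the log):

# Exits 0 while something is listening on the NVMe-oF TCP port, and
# prints "refused/unreachable" once the target process has been killed.
timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo listening || echo refused/unreachable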
00:21:00.754 [... 13 further identical reset attempts elided (16:17:01.875 through 16:17:02.042, roughly one every 14 ms), each repeating the same sequence: resetting controller -> connect() failed, errno = 111 (x2) -> sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 -> Ctrlr is in error state -> controller reinitialization failed -> in failed state. -> Resetting controller failed. ...]
00:21:01.017 [2024-04-24 16:17:02.056548] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:01.017 [2024-04-24 16:17:02.056983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.017 [2024-04-24 16:17:02.057181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.017 [2024-04-24 16:17:02.057226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:01.017 [2024-04-24 16:17:02.057244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:01.017 [2024-04-24 16:17:02.057482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:01.017 [2024-04-24 16:17:02.057723] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:01.017 [2024-04-24 16:17:02.057758] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:01.017 [2024-04-24 16:17:02.057776] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:01.017 [2024-04-24 16:17:02.061460] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:01.017 [2024-04-24 16:17:02.070465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.017 [2024-04-24 16:17:02.070922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.071093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.071120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.017 [2024-04-24 16:17:02.071138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.017 [2024-04-24 16:17:02.071391] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.017 [2024-04-24 16:17:02.071632] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.017 [2024-04-24 16:17:02.071655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.017 [2024-04-24 16:17:02.071672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.017 [2024-04-24 16:17:02.075234] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.017 [2024-04-24 16:17:02.084441] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.017 [2024-04-24 16:17:02.084936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.085125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.085154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.017 [2024-04-24 16:17:02.085172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.017 [2024-04-24 16:17:02.085410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.017 [2024-04-24 16:17:02.085650] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.017 [2024-04-24 16:17:02.085673] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.017 [2024-04-24 16:17:02.085689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.017 [2024-04-24 16:17:02.089245] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.017 [2024-04-24 16:17:02.098470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.017 [2024-04-24 16:17:02.098888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.099041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.099072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.017 [2024-04-24 16:17:02.099091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.017 [2024-04-24 16:17:02.099329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.017 [2024-04-24 16:17:02.099569] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.017 [2024-04-24 16:17:02.099592] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.017 [2024-04-24 16:17:02.099608] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.017 [2024-04-24 16:17:02.103159] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.017 [2024-04-24 16:17:02.112365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.017 [2024-04-24 16:17:02.112860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.113034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.113060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.017 [2024-04-24 16:17:02.113077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.017 [2024-04-24 16:17:02.113291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.017 [2024-04-24 16:17:02.113537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.017 [2024-04-24 16:17:02.113561] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.017 [2024-04-24 16:17:02.113577] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.017 [2024-04-24 16:17:02.117131] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.017 [2024-04-24 16:17:02.126335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.017 [2024-04-24 16:17:02.126768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.126961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.126990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.017 [2024-04-24 16:17:02.127009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.017 [2024-04-24 16:17:02.127247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.017 [2024-04-24 16:17:02.127487] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.017 [2024-04-24 16:17:02.127511] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.017 [2024-04-24 16:17:02.127528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.017 [2024-04-24 16:17:02.131087] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.017 [2024-04-24 16:17:02.140234] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.017 [2024-04-24 16:17:02.140679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.140867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.017 [2024-04-24 16:17:02.140894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.017 [2024-04-24 16:17:02.140911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.017 [2024-04-24 16:17:02.141153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.017 [2024-04-24 16:17:02.141402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.017 [2024-04-24 16:17:02.141426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.141442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.144814] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.018 [2024-04-24 16:17:02.153610] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.154045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.154215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.154241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.018 [2024-04-24 16:17:02.154258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.018 [2024-04-24 16:17:02.154509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.018 [2024-04-24 16:17:02.154706] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.018 [2024-04-24 16:17:02.154749] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.154766] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.157769] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.018 [2024-04-24 16:17:02.166890] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.167330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.167494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.167520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.018 [2024-04-24 16:17:02.167542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.018 [2024-04-24 16:17:02.167777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.018 [2024-04-24 16:17:02.167981] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.018 [2024-04-24 16:17:02.168001] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.168015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.170971] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.018 [2024-04-24 16:17:02.180226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.180659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.180833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.180862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.018 [2024-04-24 16:17:02.180879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.018 [2024-04-24 16:17:02.181132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.018 [2024-04-24 16:17:02.181330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.018 [2024-04-24 16:17:02.181349] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.181362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.184333] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.018 [2024-04-24 16:17:02.193428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.193816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.193974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.193999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.018 [2024-04-24 16:17:02.194016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.018 [2024-04-24 16:17:02.194254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.018 [2024-04-24 16:17:02.194451] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.018 [2024-04-24 16:17:02.194470] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.194483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.197445] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.018 [2024-04-24 16:17:02.206698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.207129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.207299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.207325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.018 [2024-04-24 16:17:02.207342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.018 [2024-04-24 16:17:02.207598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.018 [2024-04-24 16:17:02.207822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.018 [2024-04-24 16:17:02.207842] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.207856] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.210820] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.018 [2024-04-24 16:17:02.220022] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.220422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.220592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.220619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.018 [2024-04-24 16:17:02.220636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.018 [2024-04-24 16:17:02.220881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.018 [2024-04-24 16:17:02.221108] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.018 [2024-04-24 16:17:02.221127] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.221141] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.224097] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.018 [2024-04-24 16:17:02.233329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.233689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.233906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.233933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.018 [2024-04-24 16:17:02.233949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.018 [2024-04-24 16:17:02.234203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.018 [2024-04-24 16:17:02.234401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.018 [2024-04-24 16:17:02.234420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.234432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.237393] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.018 [2024-04-24 16:17:02.246595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.247043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.247229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.247255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.018 [2024-04-24 16:17:02.247271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.018 [2024-04-24 16:17:02.247513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.018 [2024-04-24 16:17:02.247726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.018 [2024-04-24 16:17:02.247770] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.247784] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.250719] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.018 [2024-04-24 16:17:02.259845] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.260234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.260418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.260458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.018 [2024-04-24 16:17:02.260474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.018 [2024-04-24 16:17:02.260697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.018 [2024-04-24 16:17:02.260938] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.018 [2024-04-24 16:17:02.260959] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.018 [2024-04-24 16:17:02.260972] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.018 [2024-04-24 16:17:02.263926] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.018 [2024-04-24 16:17:02.273171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.018 [2024-04-24 16:17:02.273557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.018 [2024-04-24 16:17:02.273727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.019 [2024-04-24 16:17:02.273763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.019 [2024-04-24 16:17:02.273780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.019 [2024-04-24 16:17:02.274008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.019 [2024-04-24 16:17:02.274242] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.019 [2024-04-24 16:17:02.274262] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.019 [2024-04-24 16:17:02.274275] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.019 [2024-04-24 16:17:02.277237] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.019 [2024-04-24 16:17:02.286346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.019 [2024-04-24 16:17:02.286761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.019 [2024-04-24 16:17:02.286945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.019 [2024-04-24 16:17:02.286971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.019 [2024-04-24 16:17:02.286987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.019 [2024-04-24 16:17:02.287227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.019 [2024-04-24 16:17:02.287428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.019 [2024-04-24 16:17:02.287448] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.019 [2024-04-24 16:17:02.287461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.019 [2024-04-24 16:17:02.290423] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.294 [2024-04-24 16:17:02.300017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.294 [2024-04-24 16:17:02.300420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.294 [2024-04-24 16:17:02.300628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.294 [2024-04-24 16:17:02.300653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.294 [2024-04-24 16:17:02.300670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.294 [2024-04-24 16:17:02.300923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.294 [2024-04-24 16:17:02.301171] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.294 [2024-04-24 16:17:02.301192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.294 [2024-04-24 16:17:02.301206] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.304311] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
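Across all of these cycles the shape is the same: disconnect, attempt a fresh connect, treat the refused connect as a failed reinitialization, and retry on a short cadence. The sketch below is a generic, self-contained illustration of that retry pattern -- it is not SPDK's actual reset path (the log attributes that to nvme_ctrlr.c and bdev_nvme.c), and the retry budget and 10 ms backoff are assumptions chosen only to mirror the cadence visible in the timestamps.

/* Simplified illustration of the retry cycle this log repeats:
 * tear down, attempt a fresh TCP connect, treat failure as a failed
 * reset, back off briefly, try again. Generic sketch, not SPDK code. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static bool try_connect(const char *ip, int port)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(port),
    };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;
    bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);                       /* tear down either way: the "disconnect" step */
    return ok;
}

int main(void)
{
    const int max_attempts = 30;     /* assumed retry budget */

    for (int i = 1; i <= max_attempts; i++) {
        if (try_connect("10.0.0.2", 4420)) {
            printf("reconnected on attempt %d\n", i);
            return 0;
        }
        /* "controller reinitialization failed" -> back off and retry */
        usleep(10 * 1000);           /* ~10 ms, similar to the log cadence */
    }
    fprintf(stderr, "giving up: reset kept failing\n");
    return 1;
}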
00:21:01.295 [2024-04-24 16:17:02.313178] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.313534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.313656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.313680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.295 [2024-04-24 16:17:02.313696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.295 [2024-04-24 16:17:02.313927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.295 [2024-04-24 16:17:02.314131] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.295 [2024-04-24 16:17:02.314150] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.295 [2024-04-24 16:17:02.314164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.317091] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.295 [2024-04-24 16:17:02.326419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.326840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.327005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.327030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.295 [2024-04-24 16:17:02.327047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.295 [2024-04-24 16:17:02.327299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.295 [2024-04-24 16:17:02.327497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.295 [2024-04-24 16:17:02.327520] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.295 [2024-04-24 16:17:02.327534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.330491] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.295 [2024-04-24 16:17:02.339739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.340173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.340328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.340356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.295 [2024-04-24 16:17:02.340372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.295 [2024-04-24 16:17:02.340601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.295 [2024-04-24 16:17:02.340840] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.295 [2024-04-24 16:17:02.340862] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.295 [2024-04-24 16:17:02.340878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.344189] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.295 [2024-04-24 16:17:02.353407] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.353780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.353922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.353948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.295 [2024-04-24 16:17:02.353964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.295 [2024-04-24 16:17:02.354204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.295 [2024-04-24 16:17:02.354408] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.295 [2024-04-24 16:17:02.354427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.295 [2024-04-24 16:17:02.354441] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.357501] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.295 [2024-04-24 16:17:02.366708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.367181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.367349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.367374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.295 [2024-04-24 16:17:02.367391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.295 [2024-04-24 16:17:02.367632] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.295 [2024-04-24 16:17:02.367898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.295 [2024-04-24 16:17:02.367920] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.295 [2024-04-24 16:17:02.367940] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.371014] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.295 [2024-04-24 16:17:02.379919] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.380358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.380526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.380551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.295 [2024-04-24 16:17:02.380568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.295 [2024-04-24 16:17:02.380807] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.295 [2024-04-24 16:17:02.381041] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.295 [2024-04-24 16:17:02.381077] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.295 [2024-04-24 16:17:02.381091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.384084] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.295 [2024-04-24 16:17:02.393232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.393614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.393780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.393807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.295 [2024-04-24 16:17:02.393823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.295 [2024-04-24 16:17:02.394074] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.295 [2024-04-24 16:17:02.394271] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.295 [2024-04-24 16:17:02.394290] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.295 [2024-04-24 16:17:02.394303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.397263] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.295 [2024-04-24 16:17:02.406503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.406946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.407092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.407118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.295 [2024-04-24 16:17:02.407134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.295 [2024-04-24 16:17:02.407375] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.295 [2024-04-24 16:17:02.407573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.295 [2024-04-24 16:17:02.407591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.295 [2024-04-24 16:17:02.407605] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.410605] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.295 [2024-04-24 16:17:02.419826] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.420237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.420411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.420451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.295 [2024-04-24 16:17:02.420467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.295 [2024-04-24 16:17:02.420699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.295 [2024-04-24 16:17:02.420925] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.295 [2024-04-24 16:17:02.420945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.295 [2024-04-24 16:17:02.420959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.295 [2024-04-24 16:17:02.423919] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.295 [2024-04-24 16:17:02.433162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.295 [2024-04-24 16:17:02.433590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.295 [2024-04-24 16:17:02.433762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.433788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.433805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.296 [2024-04-24 16:17:02.434044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.296 [2024-04-24 16:17:02.434242] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.296 [2024-04-24 16:17:02.434261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.296 [2024-04-24 16:17:02.434274] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.296 [2024-04-24 16:17:02.437233] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.296 [2024-04-24 16:17:02.446430] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.296 [2024-04-24 16:17:02.446849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.446990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.447016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.447032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.296 [2024-04-24 16:17:02.447276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.296 [2024-04-24 16:17:02.447473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.296 [2024-04-24 16:17:02.447492] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.296 [2024-04-24 16:17:02.447506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.296 [2024-04-24 16:17:02.450466] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.296 [2024-04-24 16:17:02.459659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.296 [2024-04-24 16:17:02.460072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.460279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.460305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.460321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.296 [2024-04-24 16:17:02.460563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.296 [2024-04-24 16:17:02.460802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.296 [2024-04-24 16:17:02.460832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.296 [2024-04-24 16:17:02.460846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.296 [2024-04-24 16:17:02.463802] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.296 [2024-04-24 16:17:02.473031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.296 [2024-04-24 16:17:02.473466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.473606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.473630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.473646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.296 [2024-04-24 16:17:02.473910] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.296 [2024-04-24 16:17:02.474126] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.296 [2024-04-24 16:17:02.474146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.296 [2024-04-24 16:17:02.474159] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.296 [2024-04-24 16:17:02.477121] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.296 [2024-04-24 16:17:02.486219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.296 [2024-04-24 16:17:02.486637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.486804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.486832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.486849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.296 [2024-04-24 16:17:02.487101] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.296 [2024-04-24 16:17:02.487298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.296 [2024-04-24 16:17:02.487317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.296 [2024-04-24 16:17:02.487330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.296 [2024-04-24 16:17:02.490291] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.296 [2024-04-24 16:17:02.499530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.296 [2024-04-24 16:17:02.499890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.500075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.500099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.500115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.296 [2024-04-24 16:17:02.500315] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.296 [2024-04-24 16:17:02.500527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.296 [2024-04-24 16:17:02.500546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.296 [2024-04-24 16:17:02.500560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.296 [2024-04-24 16:17:02.503521] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.296 [2024-04-24 16:17:02.512713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.296 [2024-04-24 16:17:02.513162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.513302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.513327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.513343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.296 [2024-04-24 16:17:02.513590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.296 [2024-04-24 16:17:02.513814] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.296 [2024-04-24 16:17:02.513835] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.296 [2024-04-24 16:17:02.513849] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.296 [2024-04-24 16:17:02.516803] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.296 [2024-04-24 16:17:02.525994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.296 [2024-04-24 16:17:02.526431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.526565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.526590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.526607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.296 [2024-04-24 16:17:02.526873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.296 [2024-04-24 16:17:02.527106] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.296 [2024-04-24 16:17:02.527126] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.296 [2024-04-24 16:17:02.527139] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.296 [2024-04-24 16:17:02.530095] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.296 [2024-04-24 16:17:02.539158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.296 [2024-04-24 16:17:02.539532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.539735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.539774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.539792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.296 [2024-04-24 16:17:02.540033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.296 [2024-04-24 16:17:02.540248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.296 [2024-04-24 16:17:02.540267] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.296 [2024-04-24 16:17:02.540280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.296 [2024-04-24 16:17:02.543241] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.296 [2024-04-24 16:17:02.552431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.296 [2024-04-24 16:17:02.552777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.552992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.296 [2024-04-24 16:17:02.553018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.296 [2024-04-24 16:17:02.553034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.297 [2024-04-24 16:17:02.553273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.297 [2024-04-24 16:17:02.553486] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.297 [2024-04-24 16:17:02.553505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.297 [2024-04-24 16:17:02.553519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.297 [2024-04-24 16:17:02.556476] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.297 [2024-04-24 16:17:02.565704] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.297 [2024-04-24 16:17:02.566160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.297 [2024-04-24 16:17:02.566297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.297 [2024-04-24 16:17:02.566322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.297 [2024-04-24 16:17:02.566339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.297 [2024-04-24 16:17:02.566589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.297 [2024-04-24 16:17:02.566814] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.297 [2024-04-24 16:17:02.566835] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.297 [2024-04-24 16:17:02.566848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.297 [2024-04-24 16:17:02.569841] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.563 [2024-04-24 16:17:02.578970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.563 [2024-04-24 16:17:02.579374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.579545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.579571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.563 [2024-04-24 16:17:02.579592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.563 [2024-04-24 16:17:02.579848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.563 [2024-04-24 16:17:02.580051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.563 [2024-04-24 16:17:02.580071] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.563 [2024-04-24 16:17:02.580085] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.563 [2024-04-24 16:17:02.583124] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.563 [2024-04-24 16:17:02.592162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.563 [2024-04-24 16:17:02.592572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.592775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.592802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.563 [2024-04-24 16:17:02.592819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.563 [2024-04-24 16:17:02.593048] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.563 [2024-04-24 16:17:02.593277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.563 [2024-04-24 16:17:02.593298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.563 [2024-04-24 16:17:02.593313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.563 [2024-04-24 16:17:02.596597] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.563 [2024-04-24 16:17:02.605831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.563 [2024-04-24 16:17:02.606206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.606389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.606414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.563 [2024-04-24 16:17:02.606431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.563 [2024-04-24 16:17:02.606644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.563 [2024-04-24 16:17:02.606870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.563 [2024-04-24 16:17:02.606892] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.563 [2024-04-24 16:17:02.606907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.563 [2024-04-24 16:17:02.610139] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.563 [2024-04-24 16:17:02.619219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.563 [2024-04-24 16:17:02.619594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.619734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.619769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.563 [2024-04-24 16:17:02.619785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.563 [2024-04-24 16:17:02.620008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.563 [2024-04-24 16:17:02.620240] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.563 [2024-04-24 16:17:02.620259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.563 [2024-04-24 16:17:02.620273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.563 [2024-04-24 16:17:02.623301] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.563 [2024-04-24 16:17:02.632497] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.563 [2024-04-24 16:17:02.632882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.633074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.633100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.563 [2024-04-24 16:17:02.633131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.563 [2024-04-24 16:17:02.633365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.563 [2024-04-24 16:17:02.633562] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.563 [2024-04-24 16:17:02.633581] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.563 [2024-04-24 16:17:02.633594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.563 [2024-04-24 16:17:02.636580] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.563 [2024-04-24 16:17:02.645847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.563 [2024-04-24 16:17:02.646286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.646449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.646475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.563 [2024-04-24 16:17:02.646491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.563 [2024-04-24 16:17:02.646730] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.563 [2024-04-24 16:17:02.646956] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.563 [2024-04-24 16:17:02.646976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.563 [2024-04-24 16:17:02.646990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.563 [2024-04-24 16:17:02.649989] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.563 [2024-04-24 16:17:02.659273] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.563 [2024-04-24 16:17:02.659652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.659824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.563 [2024-04-24 16:17:02.659850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.564 [2024-04-24 16:17:02.659867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.564 [2024-04-24 16:17:02.660118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.564 [2024-04-24 16:17:02.660320] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.564 [2024-04-24 16:17:02.660339] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.564 [2024-04-24 16:17:02.660353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.564 [2024-04-24 16:17:02.663316] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.564 [2024-04-24 16:17:02.672600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.564 [2024-04-24 16:17:02.672999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.673161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.673187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.564 [2024-04-24 16:17:02.673204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.564 [2024-04-24 16:17:02.673455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.564 [2024-04-24 16:17:02.673653] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.564 [2024-04-24 16:17:02.673672] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.564 [2024-04-24 16:17:02.673685] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.564 [2024-04-24 16:17:02.676652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.564 [2024-04-24 16:17:02.685794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.564 [2024-04-24 16:17:02.686274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.686438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.686463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.564 [2024-04-24 16:17:02.686480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.564 [2024-04-24 16:17:02.686724] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.564 [2024-04-24 16:17:02.686966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.564 [2024-04-24 16:17:02.686987] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.564 [2024-04-24 16:17:02.687001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.564 [2024-04-24 16:17:02.689961] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.564 [2024-04-24 16:17:02.698966] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.564 [2024-04-24 16:17:02.699372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.699548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.699573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.564 [2024-04-24 16:17:02.699589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.564 [2024-04-24 16:17:02.699845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.564 [2024-04-24 16:17:02.700050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.564 [2024-04-24 16:17:02.700083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.564 [2024-04-24 16:17:02.700097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.564 [2024-04-24 16:17:02.703152] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.564 [2024-04-24 16:17:02.712241] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.564 [2024-04-24 16:17:02.712657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.712803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.712830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.564 [2024-04-24 16:17:02.712847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.564 [2024-04-24 16:17:02.713073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.564 [2024-04-24 16:17:02.713286] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.564 [2024-04-24 16:17:02.713305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.564 [2024-04-24 16:17:02.713318] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.564 [2024-04-24 16:17:02.716285] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.564 [2024-04-24 16:17:02.725503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.564 [2024-04-24 16:17:02.725893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.726081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.726106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.564 [2024-04-24 16:17:02.726122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.564 [2024-04-24 16:17:02.726374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.564 [2024-04-24 16:17:02.726571] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.564 [2024-04-24 16:17:02.726591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.564 [2024-04-24 16:17:02.726604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.564 [2024-04-24 16:17:02.729567] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.564 [2024-04-24 16:17:02.738831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.564 [2024-04-24 16:17:02.739245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.739437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.739462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.564 [2024-04-24 16:17:02.739479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.564 [2024-04-24 16:17:02.739727] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.564 [2024-04-24 16:17:02.739952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.564 [2024-04-24 16:17:02.739973] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.564 [2024-04-24 16:17:02.739991] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.564 [2024-04-24 16:17:02.742954] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.564 [2024-04-24 16:17:02.751999] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.564 [2024-04-24 16:17:02.752434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.752595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.752621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.564 [2024-04-24 16:17:02.752637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.564 [2024-04-24 16:17:02.752917] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.564 [2024-04-24 16:17:02.753135] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.564 [2024-04-24 16:17:02.753155] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.564 [2024-04-24 16:17:02.753168] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.564 [2024-04-24 16:17:02.756130] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.564 [2024-04-24 16:17:02.765197] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.564 [2024-04-24 16:17:02.765577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.765728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.765762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.564 [2024-04-24 16:17:02.765780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.564 [2024-04-24 16:17:02.766022] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.564 [2024-04-24 16:17:02.766236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.564 [2024-04-24 16:17:02.766256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.564 [2024-04-24 16:17:02.766269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.564 [2024-04-24 16:17:02.769243] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.564 [2024-04-24 16:17:02.778490] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.564 [2024-04-24 16:17:02.778951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.564 [2024-04-24 16:17:02.779118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.779143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.565 [2024-04-24 16:17:02.779160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.565 [2024-04-24 16:17:02.779399] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.565 [2024-04-24 16:17:02.779597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.565 [2024-04-24 16:17:02.779616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.565 [2024-04-24 16:17:02.779629] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.565 [2024-04-24 16:17:02.782596] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.565 [2024-04-24 16:17:02.791814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.565 [2024-04-24 16:17:02.792256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.792439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.792465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.565 [2024-04-24 16:17:02.792481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.565 [2024-04-24 16:17:02.792724] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.565 [2024-04-24 16:17:02.792966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.565 [2024-04-24 16:17:02.792988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.565 [2024-04-24 16:17:02.793002] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.565 [2024-04-24 16:17:02.795960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.565 [2024-04-24 16:17:02.805035] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.565 [2024-04-24 16:17:02.805469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.805632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.805657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.565 [2024-04-24 16:17:02.805674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.565 [2024-04-24 16:17:02.805926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.565 [2024-04-24 16:17:02.806142] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.565 [2024-04-24 16:17:02.806162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.565 [2024-04-24 16:17:02.806175] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.565 [2024-04-24 16:17:02.809134] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.565 [2024-04-24 16:17:02.818323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.565 [2024-04-24 16:17:02.818669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.818835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.818862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.565 [2024-04-24 16:17:02.818878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.565 [2024-04-24 16:17:02.819107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.565 [2024-04-24 16:17:02.819305] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.565 [2024-04-24 16:17:02.819324] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.565 [2024-04-24 16:17:02.819337] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.565 [2024-04-24 16:17:02.822294] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.565 [2024-04-24 16:17:02.831486] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.565 [2024-04-24 16:17:02.831930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.832093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.832119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.565 [2024-04-24 16:17:02.832135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.565 [2024-04-24 16:17:02.832386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.565 [2024-04-24 16:17:02.832583] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.565 [2024-04-24 16:17:02.832603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.565 [2024-04-24 16:17:02.832616] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.565 [2024-04-24 16:17:02.835579] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.565 [2024-04-24 16:17:02.844748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.565 [2024-04-24 16:17:02.845144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.845285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.565 [2024-04-24 16:17:02.845310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.565 [2024-04-24 16:17:02.845327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.565 [2024-04-24 16:17:02.845540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.565 [2024-04-24 16:17:02.845764] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.565 [2024-04-24 16:17:02.845796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.565 [2024-04-24 16:17:02.845810] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.841 [2024-04-24 16:17:02.849098] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.841 [2024-04-24 16:17:02.858278] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.841 [2024-04-24 16:17:02.858657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.841 [2024-04-24 16:17:02.858795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.858822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.842 [2024-04-24 16:17:02.858838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.842 [2024-04-24 16:17:02.859067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.842 [2024-04-24 16:17:02.859288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.842 [2024-04-24 16:17:02.859309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.842 [2024-04-24 16:17:02.859323] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.842 [2024-04-24 16:17:02.862428] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.842 [2024-04-24 16:17:02.871702] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.842 [2024-04-24 16:17:02.872109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.872280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.872307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.842 [2024-04-24 16:17:02.872323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.842 [2024-04-24 16:17:02.872561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.842 [2024-04-24 16:17:02.872806] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.842 [2024-04-24 16:17:02.872828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.842 [2024-04-24 16:17:02.872843] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.842 [2024-04-24 16:17:02.876195] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.842 [2024-04-24 16:17:02.885059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.842 [2024-04-24 16:17:02.885491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.885652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.885678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.842 [2024-04-24 16:17:02.885695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.842 [2024-04-24 16:17:02.885932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.842 [2024-04-24 16:17:02.886166] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.842 [2024-04-24 16:17:02.886186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.842 [2024-04-24 16:17:02.886199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.842 [2024-04-24 16:17:02.889157] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.842 [2024-04-24 16:17:02.899011] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.842 [2024-04-24 16:17:02.899438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.899586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.899625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.842 [2024-04-24 16:17:02.899642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.842 [2024-04-24 16:17:02.899890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.842 [2024-04-24 16:17:02.900133] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.842 [2024-04-24 16:17:02.900157] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.842 [2024-04-24 16:17:02.900172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.842 [2024-04-24 16:17:02.903722] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.842 [2024-04-24 16:17:02.912933] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.842 [2024-04-24 16:17:02.913376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.913618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.913681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.842 [2024-04-24 16:17:02.913700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.842 [2024-04-24 16:17:02.913948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.842 [2024-04-24 16:17:02.914189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.842 [2024-04-24 16:17:02.914213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.842 [2024-04-24 16:17:02.914229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.842 [2024-04-24 16:17:02.917779] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.842 [2024-04-24 16:17:02.926776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.842 [2024-04-24 16:17:02.927289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.927582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.842 [2024-04-24 16:17:02.927635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.842 [2024-04-24 16:17:02.927654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.842 [2024-04-24 16:17:02.927902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.842 [2024-04-24 16:17:02.928144] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.842 [2024-04-24 16:17:02.928167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.842 [2024-04-24 16:17:02.928183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.843 [2024-04-24 16:17:02.931728] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.843 [2024-04-24 16:17:02.940720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.843 [2024-04-24 16:17:02.941163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.941383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.941432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.843 [2024-04-24 16:17:02.941451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.843 [2024-04-24 16:17:02.941688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.843 [2024-04-24 16:17:02.941940] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.843 [2024-04-24 16:17:02.941964] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.843 [2024-04-24 16:17:02.941980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.843 [2024-04-24 16:17:02.945526] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.843 [2024-04-24 16:17:02.954728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.843 [2024-04-24 16:17:02.955175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.955347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.955387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.843 [2024-04-24 16:17:02.955409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.843 [2024-04-24 16:17:02.955660] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.843 [2024-04-24 16:17:02.955914] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.843 [2024-04-24 16:17:02.955938] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.843 [2024-04-24 16:17:02.955954] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.843 [2024-04-24 16:17:02.959502] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.843 [2024-04-24 16:17:02.968704] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.843 [2024-04-24 16:17:02.969144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.969313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.969354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.843 [2024-04-24 16:17:02.969370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.843 [2024-04-24 16:17:02.969612] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.843 [2024-04-24 16:17:02.969866] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.843 [2024-04-24 16:17:02.969891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.843 [2024-04-24 16:17:02.969907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.843 [2024-04-24 16:17:02.973450] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.843 [2024-04-24 16:17:02.982660] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.843 [2024-04-24 16:17:02.983120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.983301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.983329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.843 [2024-04-24 16:17:02.983347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.843 [2024-04-24 16:17:02.983584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.843 [2024-04-24 16:17:02.983838] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.843 [2024-04-24 16:17:02.983862] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.843 [2024-04-24 16:17:02.983878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.843 [2024-04-24 16:17:02.987424] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.843 [2024-04-24 16:17:02.996649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.843 [2024-04-24 16:17:02.997154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.997447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:02.997493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.843 [2024-04-24 16:17:02.997512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.843 [2024-04-24 16:17:02.997764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.843 [2024-04-24 16:17:02.998007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.843 [2024-04-24 16:17:02.998031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.843 [2024-04-24 16:17:02.998047] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.843 [2024-04-24 16:17:03.001615] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.843 [2024-04-24 16:17:03.010621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.843 [2024-04-24 16:17:03.011109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:03.011319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:03.011344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.843 [2024-04-24 16:17:03.011360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.843 [2024-04-24 16:17:03.011591] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.843 [2024-04-24 16:17:03.011844] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.843 [2024-04-24 16:17:03.011868] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.843 [2024-04-24 16:17:03.011884] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.843 [2024-04-24 16:17:03.015428] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.843 [2024-04-24 16:17:03.024497] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.843 [2024-04-24 16:17:03.024944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:03.025202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.843 [2024-04-24 16:17:03.025241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.844 [2024-04-24 16:17:03.025257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.844 [2024-04-24 16:17:03.025495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.844 [2024-04-24 16:17:03.025737] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.844 [2024-04-24 16:17:03.025770] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.844 [2024-04-24 16:17:03.025787] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.844 [2024-04-24 16:17:03.029327] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.844 [2024-04-24 16:17:03.038320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.844 [2024-04-24 16:17:03.038817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.039017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.039046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.844 [2024-04-24 16:17:03.039064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.844 [2024-04-24 16:17:03.039302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.844 [2024-04-24 16:17:03.039548] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.844 [2024-04-24 16:17:03.039572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.844 [2024-04-24 16:17:03.039588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.844 [2024-04-24 16:17:03.043138] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.844 [2024-04-24 16:17:03.052162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.844 [2024-04-24 16:17:03.052590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.052835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.052867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.844 [2024-04-24 16:17:03.052886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.844 [2024-04-24 16:17:03.053124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.844 [2024-04-24 16:17:03.053365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.844 [2024-04-24 16:17:03.053389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.844 [2024-04-24 16:17:03.053405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.844 [2024-04-24 16:17:03.056957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.844 [2024-04-24 16:17:03.066166] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.844 [2024-04-24 16:17:03.066601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.066796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.066837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.844 [2024-04-24 16:17:03.066854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.844 [2024-04-24 16:17:03.067088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.844 [2024-04-24 16:17:03.067338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.844 [2024-04-24 16:17:03.067362] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.844 [2024-04-24 16:17:03.067378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.844 [2024-04-24 16:17:03.070931] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.844 [2024-04-24 16:17:03.080138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.844 [2024-04-24 16:17:03.080545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.080719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.080757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.844 [2024-04-24 16:17:03.080777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.844 [2024-04-24 16:17:03.081016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.844 [2024-04-24 16:17:03.081257] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.844 [2024-04-24 16:17:03.081286] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.844 [2024-04-24 16:17:03.081302] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.844 [2024-04-24 16:17:03.084877] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.844 [2024-04-24 16:17:03.094090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.844 [2024-04-24 16:17:03.094529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.094695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.094737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.844 [2024-04-24 16:17:03.094765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.844 [2024-04-24 16:17:03.095004] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.844 [2024-04-24 16:17:03.095245] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.844 [2024-04-24 16:17:03.095268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.844 [2024-04-24 16:17:03.095284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.844 [2024-04-24 16:17:03.098836] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.844 [2024-04-24 16:17:03.108055] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.844 [2024-04-24 16:17:03.108499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.108701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.844 [2024-04-24 16:17:03.108730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.844 [2024-04-24 16:17:03.108757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.844 [2024-04-24 16:17:03.108996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:01.844 [2024-04-24 16:17:03.109237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.844 [2024-04-24 16:17:03.109260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.845 [2024-04-24 16:17:03.109276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.845 [2024-04-24 16:17:03.112829] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:01.845 [2024-04-24 16:17:03.122062] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.845 [2024-04-24 16:17:03.122477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.845 [2024-04-24 16:17:03.122675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.845 [2024-04-24 16:17:03.122701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:01.845 [2024-04-24 16:17:03.122718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:01.845 [2024-04-24 16:17:03.122976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.106 [2024-04-24 16:17:03.123218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.106 [2024-04-24 16:17:03.123241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.106 [2024-04-24 16:17:03.123263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.106 [2024-04-24 16:17:03.126818] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.106 [2024-04-24 16:17:03.136040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.106 [2024-04-24 16:17:03.136481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.106 [2024-04-24 16:17:03.136677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.136710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.136753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.136994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.107 [2024-04-24 16:17:03.137234] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.107 [2024-04-24 16:17:03.137258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.107 [2024-04-24 16:17:03.137274] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.107 [2024-04-24 16:17:03.140831] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.107 [2024-04-24 16:17:03.150048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.107 [2024-04-24 16:17:03.150462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.150621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.150646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.150662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.150930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.107 [2024-04-24 16:17:03.151173] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.107 [2024-04-24 16:17:03.151196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.107 [2024-04-24 16:17:03.151212] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.107 [2024-04-24 16:17:03.154767] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.107 [2024-04-24 16:17:03.163976] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.107 [2024-04-24 16:17:03.164383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.164658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.164683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.164699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.164954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.107 [2024-04-24 16:17:03.165195] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.107 [2024-04-24 16:17:03.165219] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.107 [2024-04-24 16:17:03.165234] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.107 [2024-04-24 16:17:03.168792] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.107 [2024-04-24 16:17:03.177797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.107 [2024-04-24 16:17:03.178295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.178487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.178513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.178529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.178792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.107 [2024-04-24 16:17:03.179033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.107 [2024-04-24 16:17:03.179056] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.107 [2024-04-24 16:17:03.179072] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.107 [2024-04-24 16:17:03.182616] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.107 [2024-04-24 16:17:03.191611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.107 [2024-04-24 16:17:03.192046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.192248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.192276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.192294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.192532] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.107 [2024-04-24 16:17:03.192787] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.107 [2024-04-24 16:17:03.192811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.107 [2024-04-24 16:17:03.192827] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.107 [2024-04-24 16:17:03.196373] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.107 [2024-04-24 16:17:03.205593] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.107 [2024-04-24 16:17:03.206089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.206274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.206299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.206315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.206564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.107 [2024-04-24 16:17:03.206817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.107 [2024-04-24 16:17:03.206841] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.107 [2024-04-24 16:17:03.206857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.107 [2024-04-24 16:17:03.210402] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.107 [2024-04-24 16:17:03.219416] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.107 [2024-04-24 16:17:03.219868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.220071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.220099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.220118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.220356] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.107 [2024-04-24 16:17:03.220596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.107 [2024-04-24 16:17:03.220620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.107 [2024-04-24 16:17:03.220636] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.107 [2024-04-24 16:17:03.224190] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.107 [2024-04-24 16:17:03.233402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.107 [2024-04-24 16:17:03.233897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.234148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.234194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.234212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.234450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.107 [2024-04-24 16:17:03.234690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.107 [2024-04-24 16:17:03.234713] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.107 [2024-04-24 16:17:03.234729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.107 [2024-04-24 16:17:03.238277] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.107 [2024-04-24 16:17:03.247273] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.107 [2024-04-24 16:17:03.247703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.247890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.247919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.247937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.248175] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.107 [2024-04-24 16:17:03.248415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.107 [2024-04-24 16:17:03.248438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.107 [2024-04-24 16:17:03.248454] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.107 [2024-04-24 16:17:03.252016] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.107 [2024-04-24 16:17:03.261227] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.107 [2024-04-24 16:17:03.261661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.261851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.107 [2024-04-24 16:17:03.261881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.107 [2024-04-24 16:17:03.261899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.107 [2024-04-24 16:17:03.262137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.262377] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.262400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.108 [2024-04-24 16:17:03.262417] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.108 [2024-04-24 16:17:03.265970] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.108 [2024-04-24 16:17:03.275185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.108 [2024-04-24 16:17:03.275629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.275802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.275831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.108 [2024-04-24 16:17:03.275849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.108 [2024-04-24 16:17:03.276087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.276328] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.276351] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.108 [2024-04-24 16:17:03.276367] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.108 [2024-04-24 16:17:03.279921] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.108 [2024-04-24 16:17:03.289132] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.108 [2024-04-24 16:17:03.289562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.289737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.289775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.108 [2024-04-24 16:17:03.289793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.108 [2024-04-24 16:17:03.290030] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.290271] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.290294] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.108 [2024-04-24 16:17:03.290310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.108 [2024-04-24 16:17:03.293861] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.108 [2024-04-24 16:17:03.303092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.108 [2024-04-24 16:17:03.303528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.303712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.303755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.108 [2024-04-24 16:17:03.303777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.108 [2024-04-24 16:17:03.304014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.304255] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.304278] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.108 [2024-04-24 16:17:03.304294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.108 [2024-04-24 16:17:03.307850] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.108 [2024-04-24 16:17:03.317065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.108 [2024-04-24 16:17:03.317531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.317705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.317733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.108 [2024-04-24 16:17:03.317762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.108 [2024-04-24 16:17:03.318001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.318241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.318264] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.108 [2024-04-24 16:17:03.318280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.108 [2024-04-24 16:17:03.321835] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.108 [2024-04-24 16:17:03.331039] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.108 [2024-04-24 16:17:03.331474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.331644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.331732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.108 [2024-04-24 16:17:03.331761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.108 [2024-04-24 16:17:03.332000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.332241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.332264] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.108 [2024-04-24 16:17:03.332280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.108 [2024-04-24 16:17:03.335834] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.108 [2024-04-24 16:17:03.345043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.108 [2024-04-24 16:17:03.345632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.345862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.345888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.108 [2024-04-24 16:17:03.345910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.108 [2024-04-24 16:17:03.346152] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.346393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.346416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.108 [2024-04-24 16:17:03.346432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.108 [2024-04-24 16:17:03.349985] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.108 [2024-04-24 16:17:03.358991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.108 [2024-04-24 16:17:03.359421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.359598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.359627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.108 [2024-04-24 16:17:03.359645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.108 [2024-04-24 16:17:03.359892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.360133] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.360156] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.108 [2024-04-24 16:17:03.360172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.108 [2024-04-24 16:17:03.363716] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.108 [2024-04-24 16:17:03.372932] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.108 [2024-04-24 16:17:03.373363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.373572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.373619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.108 [2024-04-24 16:17:03.373637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.108 [2024-04-24 16:17:03.373887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.374129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.374152] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.108 [2024-04-24 16:17:03.374168] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.108 [2024-04-24 16:17:03.377719] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.108 [2024-04-24 16:17:03.386934] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.108 [2024-04-24 16:17:03.387343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.387550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.108 [2024-04-24 16:17:03.387596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.108 [2024-04-24 16:17:03.387615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.108 [2024-04-24 16:17:03.387870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.108 [2024-04-24 16:17:03.388111] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.108 [2024-04-24 16:17:03.388135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.109 [2024-04-24 16:17:03.388151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.368 [2024-04-24 16:17:03.391699] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.368 [2024-04-24 16:17:03.400925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.368 [2024-04-24 16:17:03.401347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.401511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.401536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.368 [2024-04-24 16:17:03.401568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.368 [2024-04-24 16:17:03.401818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.368 [2024-04-24 16:17:03.402059] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.368 [2024-04-24 16:17:03.402082] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.368 [2024-04-24 16:17:03.402098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.368 [2024-04-24 16:17:03.405644] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.368 [2024-04-24 16:17:03.414858] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.368 [2024-04-24 16:17:03.415295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.415497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.415526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.368 [2024-04-24 16:17:03.415544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.368 [2024-04-24 16:17:03.415794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.368 [2024-04-24 16:17:03.416034] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.368 [2024-04-24 16:17:03.416057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.368 [2024-04-24 16:17:03.416073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.368 [2024-04-24 16:17:03.419620] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.368 [2024-04-24 16:17:03.428834] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.368 [2024-04-24 16:17:03.429265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.429443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.429471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.368 [2024-04-24 16:17:03.429489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.368 [2024-04-24 16:17:03.429726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.368 [2024-04-24 16:17:03.429984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.368 [2024-04-24 16:17:03.430008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.368 [2024-04-24 16:17:03.430024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.368 [2024-04-24 16:17:03.433567] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.368 [2024-04-24 16:17:03.442788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.368 [2024-04-24 16:17:03.443219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.443394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.443424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.368 [2024-04-24 16:17:03.443443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.368 [2024-04-24 16:17:03.443680] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.368 [2024-04-24 16:17:03.443938] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.368 [2024-04-24 16:17:03.443963] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.368 [2024-04-24 16:17:03.443979] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.368 [2024-04-24 16:17:03.447524] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.368 [2024-04-24 16:17:03.456759] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.368 [2024-04-24 16:17:03.457267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.457449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.368 [2024-04-24 16:17:03.457474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.368 [2024-04-24 16:17:03.457490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.368 [2024-04-24 16:17:03.457738] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.368 [2024-04-24 16:17:03.457990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.368 [2024-04-24 16:17:03.458014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.368 [2024-04-24 16:17:03.458030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.368 [2024-04-24 16:17:03.461573] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.368 [2024-04-24 16:17:03.470564] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.369 [2024-04-24 16:17:03.471096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.471393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.471421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.369 [2024-04-24 16:17:03.471440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.369 [2024-04-24 16:17:03.471677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.369 [2024-04-24 16:17:03.471930] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.369 [2024-04-24 16:17:03.471960] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.369 [2024-04-24 16:17:03.471976] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.369 [2024-04-24 16:17:03.475526] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.369 [2024-04-24 16:17:03.484528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.369 [2024-04-24 16:17:03.484964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.485270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.485323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.369 [2024-04-24 16:17:03.485341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.369 [2024-04-24 16:17:03.485579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.369 [2024-04-24 16:17:03.485832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.369 [2024-04-24 16:17:03.485856] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.369 [2024-04-24 16:17:03.485872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.369 [2024-04-24 16:17:03.489417] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.369 [2024-04-24 16:17:03.498414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.369 [2024-04-24 16:17:03.498848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.499029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.499069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.369 [2024-04-24 16:17:03.499085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.369 [2024-04-24 16:17:03.499338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.369 [2024-04-24 16:17:03.499580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.369 [2024-04-24 16:17:03.499604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.369 [2024-04-24 16:17:03.499620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.369 [2024-04-24 16:17:03.503177] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.369 [2024-04-24 16:17:03.512406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.369 [2024-04-24 16:17:03.512813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.512994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.513023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.369 [2024-04-24 16:17:03.513041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.369 [2024-04-24 16:17:03.513279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.369 [2024-04-24 16:17:03.513520] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.369 [2024-04-24 16:17:03.513542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.369 [2024-04-24 16:17:03.513564] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.369 [2024-04-24 16:17:03.517123] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.369 [2024-04-24 16:17:03.526330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.369 [2024-04-24 16:17:03.526739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.526938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.526964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.369 [2024-04-24 16:17:03.526980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.369 [2024-04-24 16:17:03.527251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.369 [2024-04-24 16:17:03.527492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.369 [2024-04-24 16:17:03.527515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.369 [2024-04-24 16:17:03.527530] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.369 [2024-04-24 16:17:03.531087] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.369 [2024-04-24 16:17:03.540295] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.369 [2024-04-24 16:17:03.540794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.540947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.540976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.369 [2024-04-24 16:17:03.540994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.369 [2024-04-24 16:17:03.541231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.369 [2024-04-24 16:17:03.541472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.369 [2024-04-24 16:17:03.541495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.369 [2024-04-24 16:17:03.541511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.369 [2024-04-24 16:17:03.545069] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.369 [2024-04-24 16:17:03.554273] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.369 [2024-04-24 16:17:03.554718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.554901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.554943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.369 [2024-04-24 16:17:03.554962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.369 [2024-04-24 16:17:03.555199] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.369 [2024-04-24 16:17:03.555439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.369 [2024-04-24 16:17:03.555462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.369 [2024-04-24 16:17:03.555478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.369 [2024-04-24 16:17:03.559039] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.369 [2024-04-24 16:17:03.568243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.369 [2024-04-24 16:17:03.568682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.568831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.568861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.369 [2024-04-24 16:17:03.568881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.369 [2024-04-24 16:17:03.569119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.369 [2024-04-24 16:17:03.569360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.369 [2024-04-24 16:17:03.569383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.369 [2024-04-24 16:17:03.569399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.369 [2024-04-24 16:17:03.572955] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:02.369 [2024-04-24 16:17:03.582172] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.369 [2024-04-24 16:17:03.582577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.582727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.369 [2024-04-24 16:17:03.582768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:02.369 [2024-04-24 16:17:03.582788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:02.369 [2024-04-24 16:17:03.583025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:02.369 [2024-04-24 16:17:03.583266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.369 [2024-04-24 16:17:03.583289] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.369 [2024-04-24 16:17:03.583306] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.369 [2024-04-24 16:17:03.586866] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.370 [2024-04-24 16:17:03.596084] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:02.370 [2024-04-24 16:17:03.596575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.370 [2024-04-24 16:17:03.596767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.370 [2024-04-24 16:17:03.596794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:02.370 [2024-04-24 16:17:03.596811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:02.370 [2024-04-24 16:17:03.597051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:02.370 [2024-04-24 16:17:03.597291] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:02.370 [2024-04-24 16:17:03.597315] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:02.370 [2024-04-24 16:17:03.597331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:02.370 [2024-04-24 16:17:03.600907] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same ten-message reset cycle repeats 47 more times (reset attempts at 16:17:03.609 through 16:17:04.251, final "Resetting controller failed." at 16:17:04.255); only the timestamps advance — every iteration targets tqpair=0x17b6160 at addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:21:03.151 [2024-04-24 16:17:04.265104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.151 [2024-04-24 16:17:04.265716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.265957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.265983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.151 [2024-04-24 16:17:04.265999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.151 [2024-04-24 16:17:04.266250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.151 [2024-04-24 16:17:04.266441] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.151 [2024-04-24 16:17:04.266459] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.151 [2024-04-24 16:17:04.266472] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.151 [2024-04-24 16:17:04.269974] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.151 [2024-04-24 16:17:04.278928] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.151 [2024-04-24 16:17:04.279394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.279572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.279597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.151 [2024-04-24 16:17:04.279613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.151 [2024-04-24 16:17:04.279875] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.151 [2024-04-24 16:17:04.280087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.151 [2024-04-24 16:17:04.280106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.151 [2024-04-24 16:17:04.280119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.151 [2024-04-24 16:17:04.283586] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.151 [2024-04-24 16:17:04.292802] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.151 [2024-04-24 16:17:04.293260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.293454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.293480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.151 [2024-04-24 16:17:04.293496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.151 [2024-04-24 16:17:04.293753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.151 [2024-04-24 16:17:04.293991] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.151 [2024-04-24 16:17:04.294014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.151 [2024-04-24 16:17:04.294040] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.151 [2024-04-24 16:17:04.297533] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.151 [2024-04-24 16:17:04.306721] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.151 [2024-04-24 16:17:04.307125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.307329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.307369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.151 [2024-04-24 16:17:04.307385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.151 [2024-04-24 16:17:04.307619] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.151 [2024-04-24 16:17:04.307859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.151 [2024-04-24 16:17:04.307879] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.151 [2024-04-24 16:17:04.307893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.151 [2024-04-24 16:17:04.311388] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.151 [2024-04-24 16:17:04.320544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.151 [2024-04-24 16:17:04.321036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.321295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.321320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.151 [2024-04-24 16:17:04.321337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.151 [2024-04-24 16:17:04.321587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.151 [2024-04-24 16:17:04.321790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.151 [2024-04-24 16:17:04.321810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.151 [2024-04-24 16:17:04.321823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.151 [2024-04-24 16:17:04.325315] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.151 [2024-04-24 16:17:04.334463] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.151 [2024-04-24 16:17:04.334925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.335136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.151 [2024-04-24 16:17:04.335162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.152 [2024-04-24 16:17:04.335179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.152 [2024-04-24 16:17:04.335439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.152 [2024-04-24 16:17:04.335631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.152 [2024-04-24 16:17:04.335649] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.152 [2024-04-24 16:17:04.335662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.152 [2024-04-24 16:17:04.339170] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.152 [2024-04-24 16:17:04.348329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.152 [2024-04-24 16:17:04.348748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.348950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.348976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.152 [2024-04-24 16:17:04.348993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.152 [2024-04-24 16:17:04.349233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.152 [2024-04-24 16:17:04.349465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.152 [2024-04-24 16:17:04.349485] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.152 [2024-04-24 16:17:04.349498] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.152 [2024-04-24 16:17:04.353031] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.152 [2024-04-24 16:17:04.362192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.152 [2024-04-24 16:17:04.362653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.362821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.362848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.152 [2024-04-24 16:17:04.362864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.152 [2024-04-24 16:17:04.363116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.152 [2024-04-24 16:17:04.363309] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.152 [2024-04-24 16:17:04.363327] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.152 [2024-04-24 16:17:04.363340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.152 [2024-04-24 16:17:04.366820] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.152 [2024-04-24 16:17:04.376006] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.152 [2024-04-24 16:17:04.376580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.376842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.376869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.152 [2024-04-24 16:17:04.376886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.152 [2024-04-24 16:17:04.377130] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.152 [2024-04-24 16:17:04.377338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.152 [2024-04-24 16:17:04.377356] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.152 [2024-04-24 16:17:04.377369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.152 [2024-04-24 16:17:04.380840] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.152 [2024-04-24 16:17:04.389840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.152 [2024-04-24 16:17:04.390298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.390526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.390551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.152 [2024-04-24 16:17:04.390568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.152 [2024-04-24 16:17:04.390828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.152 [2024-04-24 16:17:04.391026] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.152 [2024-04-24 16:17:04.391059] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.152 [2024-04-24 16:17:04.391072] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.152 [2024-04-24 16:17:04.394542] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.152 [2024-04-24 16:17:04.403733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.152 [2024-04-24 16:17:04.404182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.404396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.404422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.152 [2024-04-24 16:17:04.404438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.152 [2024-04-24 16:17:04.404687] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.152 [2024-04-24 16:17:04.404922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.152 [2024-04-24 16:17:04.404942] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.152 [2024-04-24 16:17:04.404955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.152 [2024-04-24 16:17:04.408448] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.152 [2024-04-24 16:17:04.417629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.152 [2024-04-24 16:17:04.418075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.418268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.418293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.152 [2024-04-24 16:17:04.418310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.152 [2024-04-24 16:17:04.418560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.152 [2024-04-24 16:17:04.418776] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.152 [2024-04-24 16:17:04.418796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.152 [2024-04-24 16:17:04.418809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.152 [2024-04-24 16:17:04.422309] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.152 [2024-04-24 16:17:04.431477] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.152 [2024-04-24 16:17:04.431958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.432129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.152 [2024-04-24 16:17:04.432155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.152 [2024-04-24 16:17:04.432172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.152 [2024-04-24 16:17:04.432411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.152 [2024-04-24 16:17:04.432618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.152 [2024-04-24 16:17:04.432651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.152 [2024-04-24 16:17:04.432665] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.411 [2024-04-24 16:17:04.436183] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.411 [2024-04-24 16:17:04.445344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.411 [2024-04-24 16:17:04.445791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.411 [2024-04-24 16:17:04.445944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.411 [2024-04-24 16:17:04.445970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.411 [2024-04-24 16:17:04.446001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.411 [2024-04-24 16:17:04.446233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.411 [2024-04-24 16:17:04.446424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.411 [2024-04-24 16:17:04.446443] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.411 [2024-04-24 16:17:04.446455] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.449956] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.412 [2024-04-24 16:17:04.459326] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.459771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.459929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.459969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.412 [2024-04-24 16:17:04.459986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.412 [2024-04-24 16:17:04.460223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.412 [2024-04-24 16:17:04.460415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.412 [2024-04-24 16:17:04.460433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.412 [2024-04-24 16:17:04.460446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.463940] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.412 [2024-04-24 16:17:04.473321] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.473696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.473939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.473970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.412 [2024-04-24 16:17:04.473987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.412 [2024-04-24 16:17:04.474250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.412 [2024-04-24 16:17:04.474441] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.412 [2024-04-24 16:17:04.474460] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.412 [2024-04-24 16:17:04.474472] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.477992] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.412 [2024-04-24 16:17:04.487148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.487752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.487969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.488012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.412 [2024-04-24 16:17:04.488029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.412 [2024-04-24 16:17:04.488264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.412 [2024-04-24 16:17:04.488473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.412 [2024-04-24 16:17:04.488491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.412 [2024-04-24 16:17:04.488504] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.492015] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.412 [2024-04-24 16:17:04.500972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.501426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.501611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.501636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.412 [2024-04-24 16:17:04.501653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.412 [2024-04-24 16:17:04.501913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.412 [2024-04-24 16:17:04.502132] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.412 [2024-04-24 16:17:04.502151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.412 [2024-04-24 16:17:04.502164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.505640] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.412 [2024-04-24 16:17:04.514828] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.515494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.515706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.515734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.412 [2024-04-24 16:17:04.515779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.412 [2024-04-24 16:17:04.516039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.412 [2024-04-24 16:17:04.516247] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.412 [2024-04-24 16:17:04.516267] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.412 [2024-04-24 16:17:04.516279] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.519767] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.412 [2024-04-24 16:17:04.528733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.529128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.529375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.529400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.412 [2024-04-24 16:17:04.529415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.412 [2024-04-24 16:17:04.529639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.412 [2024-04-24 16:17:04.529859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.412 [2024-04-24 16:17:04.529879] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.412 [2024-04-24 16:17:04.529892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.533384] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.412 [2024-04-24 16:17:04.542545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.542985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.543184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.543210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.412 [2024-04-24 16:17:04.543226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.412 [2024-04-24 16:17:04.543479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.412 [2024-04-24 16:17:04.543732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.412 [2024-04-24 16:17:04.543790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.412 [2024-04-24 16:17:04.543805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.547303] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.412 [2024-04-24 16:17:04.556459] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.556907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.557093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.557119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.412 [2024-04-24 16:17:04.557135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.412 [2024-04-24 16:17:04.557393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.412 [2024-04-24 16:17:04.557585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.412 [2024-04-24 16:17:04.557603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.412 [2024-04-24 16:17:04.557616] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.561131] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.412 [2024-04-24 16:17:04.570294] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.570799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.570973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.570998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.412 [2024-04-24 16:17:04.571015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.412 [2024-04-24 16:17:04.571268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.412 [2024-04-24 16:17:04.571461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.412 [2024-04-24 16:17:04.571479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.412 [2024-04-24 16:17:04.571491] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.412 [2024-04-24 16:17:04.574998] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.412 [2024-04-24 16:17:04.584146] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.412 [2024-04-24 16:17:04.584587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.412 [2024-04-24 16:17:04.584769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.584803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.413 [2024-04-24 16:17:04.584823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.413 [2024-04-24 16:17:04.585074] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.413 [2024-04-24 16:17:04.585265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.413 [2024-04-24 16:17:04.585283] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.413 [2024-04-24 16:17:04.585296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.413 [2024-04-24 16:17:04.588779] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.413 [2024-04-24 16:17:04.597962] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.413 [2024-04-24 16:17:04.598406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.598595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.598621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.413 [2024-04-24 16:17:04.598638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.413 [2024-04-24 16:17:04.598890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.413 [2024-04-24 16:17:04.599099] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.413 [2024-04-24 16:17:04.599120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.413 [2024-04-24 16:17:04.599133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.413 [2024-04-24 16:17:04.602579] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.413 [2024-04-24 16:17:04.611900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.413 [2024-04-24 16:17:04.612328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.612496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.612522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.413 [2024-04-24 16:17:04.612539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.413 [2024-04-24 16:17:04.612787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.413 [2024-04-24 16:17:04.612992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.413 [2024-04-24 16:17:04.613011] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.413 [2024-04-24 16:17:04.613040] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.413 [2024-04-24 16:17:04.616235] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.413 [2024-04-24 16:17:04.625839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.413 [2024-04-24 16:17:04.626303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.626487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.626513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.413 [2024-04-24 16:17:04.626530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.413 [2024-04-24 16:17:04.626789] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.413 [2024-04-24 16:17:04.626987] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.413 [2024-04-24 16:17:04.627006] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.413 [2024-04-24 16:17:04.627019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.413 [2024-04-24 16:17:04.630502] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.413 [2024-04-24 16:17:04.639666] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.413 [2024-04-24 16:17:04.640131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.640347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.640373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.413 [2024-04-24 16:17:04.640390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.413 [2024-04-24 16:17:04.640653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.413 [2024-04-24 16:17:04.640875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.413 [2024-04-24 16:17:04.640900] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.413 [2024-04-24 16:17:04.640914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.413 [2024-04-24 16:17:04.644408] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.413 [2024-04-24 16:17:04.653594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.413 [2024-04-24 16:17:04.654028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.654231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.654257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.413 [2024-04-24 16:17:04.654289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.413 [2024-04-24 16:17:04.654536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.413 [2024-04-24 16:17:04.654727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.413 [2024-04-24 16:17:04.654770] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.413 [2024-04-24 16:17:04.654785] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.413 [2024-04-24 16:17:04.658290] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.413 [2024-04-24 16:17:04.667449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.413 [2024-04-24 16:17:04.667834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.668058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.668084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.413 [2024-04-24 16:17:04.668101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.413 [2024-04-24 16:17:04.668358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.413 [2024-04-24 16:17:04.668565] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.413 [2024-04-24 16:17:04.668584] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.413 [2024-04-24 16:17:04.668596] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.413 [2024-04-24 16:17:04.672109] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.413 [2024-04-24 16:17:04.681282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.413 [2024-04-24 16:17:04.681702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.681925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.681952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.413 [2024-04-24 16:17:04.681969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.413 [2024-04-24 16:17:04.682215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.413 [2024-04-24 16:17:04.682407] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.413 [2024-04-24 16:17:04.682425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.413 [2024-04-24 16:17:04.682443] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.413 [2024-04-24 16:17:04.685986] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.413 [2024-04-24 16:17:04.695147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.413 [2024-04-24 16:17:04.695551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.695776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.413 [2024-04-24 16:17:04.695803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.413 [2024-04-24 16:17:04.695820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.674 [2024-04-24 16:17:04.696034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.674 [2024-04-24 16:17:04.696271] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.674 [2024-04-24 16:17:04.696291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.674 [2024-04-24 16:17:04.696305] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.674 [2024-04-24 16:17:04.699827] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.674 [2024-04-24 16:17:04.709020] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.674 [2024-04-24 16:17:04.709412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.674 [2024-04-24 16:17:04.709634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.674 [2024-04-24 16:17:04.709659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.674 [2024-04-24 16:17:04.709676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.674 [2024-04-24 16:17:04.709930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.674 [2024-04-24 16:17:04.710161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.674 [2024-04-24 16:17:04.710180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.674 [2024-04-24 16:17:04.710193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.674 [2024-04-24 16:17:04.713667] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.674 [2024-04-24 16:17:04.722854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.674 [2024-04-24 16:17:04.723526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.674 [2024-04-24 16:17:04.723774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.674 [2024-04-24 16:17:04.723804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420 00:21:03.674 [2024-04-24 16:17:04.723821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set 00:21:03.674 [2024-04-24 16:17:04.724060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor 00:21:03.674 [2024-04-24 16:17:04.724269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.674 [2024-04-24 16:17:04.724288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.674 [2024-04-24 16:17:04.724301] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.674 [2024-04-24 16:17:04.727797] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.674 [2024-04-24 16:17:04.736784] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.674 [2024-04-24 16:17:04.737439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.737655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.737682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.674 [2024-04-24 16:17:04.737700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.674 [2024-04-24 16:17:04.737951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.674 [2024-04-24 16:17:04.738162] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.674 [2024-04-24 16:17:04.738181] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.674 [2024-04-24 16:17:04.738194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.674 [2024-04-24 16:17:04.741671] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.674 [2024-04-24 16:17:04.750679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.674 [2024-04-24 16:17:04.751125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.751313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.751339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.674 [2024-04-24 16:17:04.751356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.674 [2024-04-24 16:17:04.751617] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.674 [2024-04-24 16:17:04.751840] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.674 [2024-04-24 16:17:04.751861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.674 [2024-04-24 16:17:04.751874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.674 [2024-04-24 16:17:04.755370] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.674 [2024-04-24 16:17:04.764530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.674 [2024-04-24 16:17:04.764957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.765132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.765158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.674 [2024-04-24 16:17:04.765174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.674 [2024-04-24 16:17:04.765429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.674 [2024-04-24 16:17:04.765620] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.674 [2024-04-24 16:17:04.765639] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.674 [2024-04-24 16:17:04.765652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.674 [2024-04-24 16:17:04.769168] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.674 [2024-04-24 16:17:04.778544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.674 [2024-04-24 16:17:04.778950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.779150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.779176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.674 [2024-04-24 16:17:04.779208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.674 [2024-04-24 16:17:04.779440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.674 [2024-04-24 16:17:04.779646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.674 [2024-04-24 16:17:04.779665] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.674 [2024-04-24 16:17:04.779678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.674 [2024-04-24 16:17:04.783188] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.674 [2024-04-24 16:17:04.792354] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.674 [2024-04-24 16:17:04.792803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.792983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.793009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.674 [2024-04-24 16:17:04.793025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.674 [2024-04-24 16:17:04.793280] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.674 [2024-04-24 16:17:04.793472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.674 [2024-04-24 16:17:04.793490] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.674 [2024-04-24 16:17:04.793503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.674 [2024-04-24 16:17:04.797011] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.674 [2024-04-24 16:17:04.806179] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.674 [2024-04-24 16:17:04.806624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.806785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.674 [2024-04-24 16:17:04.806811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.674 [2024-04-24 16:17:04.806828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.674 [2024-04-24 16:17:04.807067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.674 [2024-04-24 16:17:04.807259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.674 [2024-04-24 16:17:04.807277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.675 [2024-04-24 16:17:04.807290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.675 [2024-04-24 16:17:04.810773] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.675 [2024-04-24 16:17:04.820017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.675 [2024-04-24 16:17:04.820466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.820674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.820699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.675 [2024-04-24 16:17:04.820716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.675 [2024-04-24 16:17:04.820975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.675 [2024-04-24 16:17:04.821202] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.675 [2024-04-24 16:17:04.821221] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.675 [2024-04-24 16:17:04.821234] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.675 [2024-04-24 16:17:04.824711] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3474947 Killed "${NVMF_APP[@]}" "$@"
00:21:03.675 16:17:04 -- host/bdevperf.sh@36 -- # tgt_init
00:21:03.675 16:17:04 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:21:03.675 16:17:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:03.675 16:17:04 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:03.675 16:17:04 -- common/autotest_common.sh@10 -- # set +x
00:21:03.675 [2024-04-24 16:17:04.833929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.675 [2024-04-24 16:17:04.834345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.834528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.834556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.675 [2024-04-24 16:17:04.834574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.675 [2024-04-24 16:17:04.834822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.675 [2024-04-24 16:17:04.835065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.675 [2024-04-24 16:17:04.835088] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.675 [2024-04-24 16:17:04.835105] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.675 16:17:04 -- nvmf/common.sh@470 -- # nvmfpid=3475908
00:21:03.675 16:17:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:21:03.675 16:17:04 -- nvmf/common.sh@471 -- # waitforlisten 3475908
00:21:03.675 16:17:04 -- common/autotest_common.sh@817 -- # '[' -z 3475908 ']'
00:21:03.675 16:17:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:03.675 16:17:04 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:03.675 16:17:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:03.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:03.675 16:17:04 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:03.675 16:17:04 -- common/autotest_common.sh@10 -- # set +x
00:21:03.675 [2024-04-24 16:17:04.838652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.675 [2024-04-24 16:17:04.847873] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.675 [2024-04-24 16:17:04.848313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.848492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.848527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.675 [2024-04-24 16:17:04.848546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.675 [2024-04-24 16:17:04.848793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.675 [2024-04-24 16:17:04.849035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.675 [2024-04-24 16:17:04.849059] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.675 [2024-04-24 16:17:04.849074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.675 [2024-04-24 16:17:04.852623] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
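The trace above is the target restart: the previous nvmf_tgt instance (pid 3474947) was killed, tgt_init/nvmfappstart relaunch nvmf_tgt (new pid 3475908) inside the cvl_0_0_ns_spdk network namespace, and waitforlisten polls the RPC socket (max_retries=100) before the script proceeds; the initiator keeps failing resets in the background the whole time. A simplified sketch of that launch-and-wait step, assuming SPDK's stock scripts/rpc.py; this is not the verbatim autotest_common.sh helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten, roughly: retry until the app answers on /var/tmp/spdk.sock
    for ((i = 1; i <= 100; i++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
    (( i > 100 )) && { echo "nvmf_tgt (pid $nvmfpid) never started listening" >&2; exit 1; }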
00:21:03.675 [2024-04-24 16:17:04.861844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.675 [2024-04-24 16:17:04.862259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.862435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.862464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.675 [2024-04-24 16:17:04.862482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.675 [2024-04-24 16:17:04.862720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.675 [2024-04-24 16:17:04.862970] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.675 [2024-04-24 16:17:04.862994] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.675 [2024-04-24 16:17:04.863011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.675 [2024-04-24 16:17:04.866558] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.675 [2024-04-24 16:17:04.875789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.675 [2024-04-24 16:17:04.876214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.876393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.876421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.675 [2024-04-24 16:17:04.876440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.675 [2024-04-24 16:17:04.876678] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.675 [2024-04-24 16:17:04.876929] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.675 [2024-04-24 16:17:04.876953] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.675 [2024-04-24 16:17:04.876970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.675 [2024-04-24 16:17:04.880518] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.675 [2024-04-24 16:17:04.883154] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:21:03.675 [2024-04-24 16:17:04.883222] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:03.675 [2024-04-24 16:17:04.889137] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.675 [2024-04-24 16:17:04.889517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.889715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.889747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.675 [2024-04-24 16:17:04.889765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.675 [2024-04-24 16:17:04.889994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.675 [2024-04-24 16:17:04.890210] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.675 [2024-04-24 16:17:04.890229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.675 [2024-04-24 16:17:04.890243] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.675 [2024-04-24 16:17:04.893247] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.675 [2024-04-24 16:17:04.902384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.675 [2024-04-24 16:17:04.902903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.903046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.903071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.675 [2024-04-24 16:17:04.903088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.675 [2024-04-24 16:17:04.903340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.675 [2024-04-24 16:17:04.903537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.675 [2024-04-24 16:17:04.903556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.675 [2024-04-24 16:17:04.903569] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.675 [2024-04-24 16:17:04.906538] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.675 [2024-04-24 16:17:04.915568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.675 [2024-04-24 16:17:04.916055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.675 [2024-04-24 16:17:04.916215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.676 [2024-04-24 16:17:04.916240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.676 [2024-04-24 16:17:04.916257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.676 [2024-04-24 16:17:04.916497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.676 [2024-04-24 16:17:04.916710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.676 [2024-04-24 16:17:04.916752] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.676 [2024-04-24 16:17:04.916768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.676 EAL: No free 2048 kB hugepages reported on node 1
00:21:03.676 [2024-04-24 16:17:04.919706] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.676 [2024-04-24 16:17:04.929527] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.676 [2024-04-24 16:17:04.929983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.676 [2024-04-24 16:17:04.930181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.676 [2024-04-24 16:17:04.930207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.676 [2024-04-24 16:17:04.930224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.676 [2024-04-24 16:17:04.930465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.676 [2024-04-24 16:17:04.930678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.676 [2024-04-24 16:17:04.930697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.676 [2024-04-24 16:17:04.930710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.676 [2024-04-24 16:17:04.934245] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
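The interleaved EAL line about node 1 appears to be informational rather than fatal here: the target finishes initializing below, so its hugepage allocation was evidently satisfied elsewhere. Per-node availability of 2048 kB hugepages can be inspected through sysfs (a standalone check, not part of the traced script):

    # Show free 2048 kB hugepages on each NUMA node.
    for node in /sys/devices/system/node/node*; do
        echo "$(basename "$node"): $(cat "$node/hugepages/hugepages-2048kB/free_hugepages") free"
    done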
00:21:03.676 [2024-04-24 16:17:04.943431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.676 [2024-04-24 16:17:04.943858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.676 [2024-04-24 16:17:04.944032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.676 [2024-04-24 16:17:04.944059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.676 [2024-04-24 16:17:04.944075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.676 [2024-04-24 16:17:04.944330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.676 [2024-04-24 16:17:04.944583] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.676 [2024-04-24 16:17:04.944606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.676 [2024-04-24 16:17:04.944622] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.676 [2024-04-24 16:17:04.948228] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.676 [2024-04-24 16:17:04.952921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:03.676 [2024-04-24 16:17:04.957297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.676 [2024-04-24 16:17:04.957767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.676 [2024-04-24 16:17:04.957953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.676 [2024-04-24 16:17:04.957979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.676 [2024-04-24 16:17:04.957996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.936 [2024-04-24 16:17:04.958239] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.936 [2024-04-24 16:17:04.958444] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.936 [2024-04-24 16:17:04.958464] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.936 [2024-04-24 16:17:04.958478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.936 [2024-04-24 16:17:04.961977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.936 [2024-04-24 16:17:04.971225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.936 [2024-04-24 16:17:04.971732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:04.971954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:04.971993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.936 [2024-04-24 16:17:04.972013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.936 [2024-04-24 16:17:04.972275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.936 [2024-04-24 16:17:04.972477] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.936 [2024-04-24 16:17:04.972496] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.936 [2024-04-24 16:17:04.972511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.936 [2024-04-24 16:17:04.975977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.936 [2024-04-24 16:17:04.985175] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.936 [2024-04-24 16:17:04.985619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:04.985825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:04.985852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.936 [2024-04-24 16:17:04.985869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.936 [2024-04-24 16:17:04.986112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.936 [2024-04-24 16:17:04.986326] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.936 [2024-04-24 16:17:04.986346] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.936 [2024-04-24 16:17:04.986360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.936 [2024-04-24 16:17:04.989819] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.936 [2024-04-24 16:17:04.999004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.936 [2024-04-24 16:17:04.999395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:04.999614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:04.999640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.936 [2024-04-24 16:17:04.999657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.936 [2024-04-24 16:17:04.999908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.936 [2024-04-24 16:17:05.000148] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.936 [2024-04-24 16:17:05.000169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.936 [2024-04-24 16:17:05.000183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.936 [2024-04-24 16:17:05.003667] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.936 [2024-04-24 16:17:05.012881] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.936 [2024-04-24 16:17:05.013374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.013531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.013557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.936 [2024-04-24 16:17:05.013582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.936 [2024-04-24 16:17:05.013845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.936 [2024-04-24 16:17:05.014104] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.936 [2024-04-24 16:17:05.014129] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.936 [2024-04-24 16:17:05.014146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.936 [2024-04-24 16:17:05.017694] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.936 [2024-04-24 16:17:05.026909] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.936 [2024-04-24 16:17:05.027492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.027700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.027729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.936 [2024-04-24 16:17:05.027760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.936 [2024-04-24 16:17:05.028017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.936 [2024-04-24 16:17:05.028281] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.936 [2024-04-24 16:17:05.028307] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.936 [2024-04-24 16:17:05.028326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.936 [2024-04-24 16:17:05.031874] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.936 [2024-04-24 16:17:05.040827] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.936 [2024-04-24 16:17:05.041269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.041438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.041466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.936 [2024-04-24 16:17:05.041498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.936 [2024-04-24 16:17:05.041756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.936 [2024-04-24 16:17:05.041983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.936 [2024-04-24 16:17:05.042004] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.936 [2024-04-24 16:17:05.042018] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.936 [2024-04-24 16:17:05.045516] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.936 [2024-04-24 16:17:05.054557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.936 [2024-04-24 16:17:05.054979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.055180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.055206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.936 [2024-04-24 16:17:05.055223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.936 [2024-04-24 16:17:05.055486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.936 [2024-04-24 16:17:05.055728] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.936 [2024-04-24 16:17:05.055762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.936 [2024-04-24 16:17:05.055779] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.936 [2024-04-24 16:17:05.059258] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.936 [2024-04-24 16:17:05.068218] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.936 [2024-04-24 16:17:05.068543] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:03.936 [2024-04-24 16:17:05.068590] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:03.936 [2024-04-24 16:17:05.068606] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:03.936 [2024-04-24 16:17:05.068619] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:03.936 [2024-04-24 16:17:05.068630] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
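The app_setup_trace notices spell out how to inspect this instance's tracepoints (group mask 0xFFFF was requested via -e 0xFFFF at launch). Following the notices verbatim, the snapshot and the offline copy would be:

    # Capture a snapshot of events from the running nvmf app (shm instance 0):
    spdk_trace -s nvmf -i 0
    # Or keep the raw trace shared-memory file for offline analysis:
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0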
00:21:03.936 [2024-04-24 16:17:05.068625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.068699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:03.936 [2024-04-24 16:17:05.068761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:03.936 [2024-04-24 16:17:05.068765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:03.936 [2024-04-24 16:17:05.068874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.936 [2024-04-24 16:17:05.068901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.936 [2024-04-24 16:17:05.068918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.937 [2024-04-24 16:17:05.069133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.937 [2024-04-24 16:17:05.069361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.937 [2024-04-24 16:17:05.069383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.937 [2024-04-24 16:17:05.069398] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.937 [2024-04-24 16:17:05.072541] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.937 [2024-04-24 16:17:05.081672] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.937 [2024-04-24 16:17:05.082253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.082451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.082478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.937 [2024-04-24 16:17:05.082498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.937 [2024-04-24 16:17:05.082735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.937 [2024-04-24 16:17:05.082982] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.937 [2024-04-24 16:17:05.083005] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.937 [2024-04-24 16:17:05.083022] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.937 [2024-04-24 16:17:05.086236] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
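The three reactor lines agree with the -m 0xE core mask passed to nvmf_tgt and with the earlier 'Total cores available: 3' notice: 0xE is binary 1110, so cores 1, 2 and 3 run reactors while core 0 is left out. A small illustrative decoder for such masks (not from the test suite):

    mask=0xE
    printf 'mask %s -> cores:' "$mask"
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && printf ' %d' "$core"
    done
    echo    # prints: mask 0xE -> cores: 1 2 3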
00:21:03.937 [2024-04-24 16:17:05.095142] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.937 [2024-04-24 16:17:05.095724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.095916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.095943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.937 [2024-04-24 16:17:05.095963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.937 [2024-04-24 16:17:05.096202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.937 [2024-04-24 16:17:05.096418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.937 [2024-04-24 16:17:05.096439] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.937 [2024-04-24 16:17:05.096456] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.937 [2024-04-24 16:17:05.099606] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.937 [2024-04-24 16:17:05.108689] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.937 [2024-04-24 16:17:05.109211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.109389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.109416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.937 [2024-04-24 16:17:05.109435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.937 [2024-04-24 16:17:05.109659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.937 [2024-04-24 16:17:05.109890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.937 [2024-04-24 16:17:05.109913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.937 [2024-04-24 16:17:05.109930] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.937 [2024-04-24 16:17:05.113049] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.937 [2024-04-24 16:17:05.122251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.937 [2024-04-24 16:17:05.122729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.122928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.122955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.937 [2024-04-24 16:17:05.122974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.937 [2024-04-24 16:17:05.123213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.937 [2024-04-24 16:17:05.123428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.937 [2024-04-24 16:17:05.123449] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.937 [2024-04-24 16:17:05.123466] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.937 [2024-04-24 16:17:05.126614] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.937 [2024-04-24 16:17:05.135754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.937 [2024-04-24 16:17:05.136334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.136558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.136585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.937 [2024-04-24 16:17:05.136605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.937 [2024-04-24 16:17:05.136841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.937 [2024-04-24 16:17:05.137077] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.937 [2024-04-24 16:17:05.137098] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.937 [2024-04-24 16:17:05.137115] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.937 [2024-04-24 16:17:05.140258] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.937 [2024-04-24 16:17:05.149332] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.937 [2024-04-24 16:17:05.149828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.150009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.150036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.937 [2024-04-24 16:17:05.150055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.937 [2024-04-24 16:17:05.150291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.937 [2024-04-24 16:17:05.150505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.937 [2024-04-24 16:17:05.150526] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.937 [2024-04-24 16:17:05.150543] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.937 [2024-04-24 16:17:05.153684] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.937 [2024-04-24 16:17:05.162756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.937 [2024-04-24 16:17:05.163217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.163414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.163440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.937 [2024-04-24 16:17:05.163457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.937 [2024-04-24 16:17:05.163672] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.937 [2024-04-24 16:17:05.163928] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.937 [2024-04-24 16:17:05.163951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.937 [2024-04-24 16:17:05.163966] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.937 [2024-04-24 16:17:05.167190] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.937 [2024-04-24 16:17:05.176357] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.937 [2024-04-24 16:17:05.176768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.176921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.176947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.937 [2024-04-24 16:17:05.176964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.937 [2024-04-24 16:17:05.177177] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.937 [2024-04-24 16:17:05.177394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.937 [2024-04-24 16:17:05.177416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.937 [2024-04-24 16:17:05.177430] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.937 [2024-04-24 16:17:05.180637] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.937 16:17:05 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:03.937 16:17:05 -- common/autotest_common.sh@850 -- # return 0
00:21:03.937 16:17:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:03.937 16:17:05 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:03.937 16:17:05 -- common/autotest_common.sh@10 -- # set +x
00:21:03.937 [2024-04-24 16:17:05.189960] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.937 [2024-04-24 16:17:05.190312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.190490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.937 [2024-04-24 16:17:05.190517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.937 [2024-04-24 16:17:05.190534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.938 [2024-04-24 16:17:05.190758] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.938 [2024-04-24 16:17:05.190976] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.938 [2024-04-24 16:17:05.190997] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.938 [2024-04-24 16:17:05.191012] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.938 [2024-04-24 16:17:05.194230] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.938 [2024-04-24 16:17:05.203491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.938 [2024-04-24 16:17:05.203908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.938 [2024-04-24 16:17:05.204070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.938 [2024-04-24 16:17:05.204107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.938 [2024-04-24 16:17:05.204123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.938 [2024-04-24 16:17:05.204352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.938 [2024-04-24 16:17:05.204563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.938 [2024-04-24 16:17:05.204583] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.938 [2024-04-24 16:17:05.204597] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.938 [2024-04-24 16:17:05.207809] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.938 16:17:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:03.938 16:17:05 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:03.938 16:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:03.938 16:17:05 -- common/autotest_common.sh@10 -- # set +x
00:21:03.938 [2024-04-24 16:17:05.215078] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:03.938 [2024-04-24 16:17:05.217078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.938 [2024-04-24 16:17:05.217467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.938 [2024-04-24 16:17:05.217593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.938 [2024-04-24 16:17:05.217619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:03.938 [2024-04-24 16:17:05.217636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:03.938 [2024-04-24 16:17:05.217860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:03.938 [2024-04-24 16:17:05.218081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.938 [2024-04-24 16:17:05.218102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.938 [2024-04-24 16:17:05.218117] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
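With the new target answering RPCs, the script recreates the TCP transport; the 'TCP Transport Init' notice is the target-side acknowledgement. rpc_cmd is effectively a wrapper around SPDK's scripts/rpc.py, so the traced call corresponds roughly to the following (default RPC socket assumed):

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192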
00:21:04.197 16:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:04.197 16:17:05 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:04.197 16:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:04.197 [2024-04-24 16:17:05.221398] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:04.197 16:17:05 -- common/autotest_common.sh@10 -- # set +x
00:21:04.197 [2024-04-24 16:17:05.230604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:04.197 [2024-04-24 16:17:05.230988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.197 [2024-04-24 16:17:05.231152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.197 [2024-04-24 16:17:05.231178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:04.197 [2024-04-24 16:17:05.231194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:04.197 [2024-04-24 16:17:05.231431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:04.197 [2024-04-24 16:17:05.231636] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:04.197 [2024-04-24 16:17:05.231656] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:04.197 [2024-04-24 16:17:05.231669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:04.197 [2024-04-24 16:17:05.234859] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:04.197 [2024-04-24 16:17:05.244226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:04.197 [2024-04-24 16:17:05.244656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.197 [2024-04-24 16:17:05.244791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.197 [2024-04-24 16:17:05.244818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:04.197 [2024-04-24 16:17:05.244835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:04.197 [2024-04-24 16:17:05.245074] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:04.197 [2024-04-24 16:17:05.245294] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:04.197 [2024-04-24 16:17:05.245315] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:04.197 [2024-04-24 16:17:05.245329] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:04.197 [2024-04-24 16:17:05.248554] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
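Next comes the backing device: a RAM-backed bdev of 64 MB with a 512-byte block size, named Malloc0 (the bare 'Malloc0' a few lines below is the RPC echoing the created bdev's name back). The rough rpc.py equivalent of the traced rpc_cmd call:

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0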
00:21:04.197 [2024-04-24 16:17:05.257684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:04.197 [2024-04-24 16:17:05.258267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.197 [2024-04-24 16:17:05.258494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.197 [2024-04-24 16:17:05.258520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:04.197 [2024-04-24 16:17:05.258540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:04.197 [2024-04-24 16:17:05.258774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:04.197 [2024-04-24 16:17:05.259002] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:04.197 [2024-04-24 16:17:05.259024] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:04.197 [2024-04-24 16:17:05.259056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:04.197 Malloc0
00:21:04.197 [2024-04-24 16:17:05.262268] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:04.197 16:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:04.197 16:17:05 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:04.197 16:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:04.197 16:17:05 -- common/autotest_common.sh@10 -- # set +x
00:21:04.198 16:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:04.198 16:17:05 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:04.198 16:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:04.198 16:17:05 -- common/autotest_common.sh@10 -- # set +x
00:21:04.198 [2024-04-24 16:17:05.271359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:04.198 [2024-04-24 16:17:05.271795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.198 [2024-04-24 16:17:05.271943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.198 [2024-04-24 16:17:05.271969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b6160 with addr=10.0.0.2, port=4420
00:21:04.198 [2024-04-24 16:17:05.271986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6160 is same with the state(5) to be set
00:21:04.198 [2024-04-24 16:17:05.272201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b6160 (9): Bad file descriptor
00:21:04.198 [2024-04-24 16:17:05.272433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:04.198 [2024-04-24 16:17:05.272454] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:04.198 [2024-04-24 16:17:05.272468] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
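The subsystem plumbing follows: nqn.2016-06.io.spdk:cnode1 is created (-a allows any host to connect, -s sets the serial number) and Malloc0 is attached as its namespace. Rough rpc.py equivalents of the two traced rpc_cmd calls:

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0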
00:21:04.198 [2024-04-24 16:17:05.275669] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:04.198 16:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:04.198 16:17:05 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:04.198 16:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:04.198 16:17:05 -- common/autotest_common.sh@10 -- # set +x
00:21:04.198 [2024-04-24 16:17:05.281765] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:04.198 [2024-04-24 16:17:05.284983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:04.198 16:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:04.198 16:17:05 -- host/bdevperf.sh@38 -- # wait 3475238
00:21:04.198 [2024-04-24 16:17:05.319248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:14.169
00:21:14.169 Latency(us)
00:21:14.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:14.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:14.169 Verification LBA range: start 0x0 length 0x4000
00:21:14.169 Nvme1n1 : 15.01 6616.59 25.85 8859.10 0.00 8245.75 849.54 19223.89
00:21:14.169 ===================================================================================================================
00:21:14.169 Total : 6616.59 25.85 8859.10 0.00 8245.75 849.54 19223.89
00:21:14.169 16:17:14 -- host/bdevperf.sh@39 -- # sync
00:21:14.169 16:17:14 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:14.169 16:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:14.169 16:17:14 -- common/autotest_common.sh@10 -- # set +x
00:21:14.169 16:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:14.169 16:17:14 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:21:14.169 16:17:14 -- host/bdevperf.sh@44 -- # nvmftestfini
00:21:14.169 16:17:14 -- nvmf/common.sh@477 -- # nvmfcleanup
00:21:14.169 16:17:14 -- nvmf/common.sh@117 -- # sync
00:21:14.169 16:17:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:14.169 16:17:14 -- nvmf/common.sh@120 -- # set +e
00:21:14.169 16:17:14 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:14.169 16:17:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:14.169 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:14.169 16:17:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:14.169 16:17:14 -- nvmf/common.sh@124 -- # set -e
00:21:14.169 16:17:14 -- nvmf/common.sh@125 -- # return 0
00:21:14.169 16:17:14 -- nvmf/common.sh@478 -- # '[' -n 3475908 ']'
00:21:14.169 16:17:14 -- nvmf/common.sh@479 -- # killprocess 3475908
00:21:14.169 16:17:14 -- common/autotest_common.sh@936 -- # '[' -z 3475908 ']'
00:21:14.169 16:17:14 -- common/autotest_common.sh@940 -- # kill -0 3475908
00:21:14.169 16:17:14 -- common/autotest_common.sh@941 -- # uname
00:21:14.169 16:17:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:14.169 16:17:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3475908
00:21:14.169 16:17:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:14.169 16:17:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:14.169 16:17:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3475908'
00:21:14.169 killing process with pid 3475908
00:21:14.169 16:17:14 -- common/autotest_common.sh@955 -- # kill 3475908
00:21:14.169 16:17:14 -- common/autotest_common.sh@960 -- # wait 3475908
00:21:14.169 16:17:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:21:14.169 16:17:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:21:14.169 16:17:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:21:14.169 16:17:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:14.169 16:17:14 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:14.169 16:17:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:14.169 16:17:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:14.169 16:17:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:16.073 16:17:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:16.073
00:21:16.073 real 0m22.459s
00:21:16.073 user 1m0.286s
00:21:16.073 sys 0m4.152s
00:21:16.073 16:17:17 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:16.073 16:17:17 -- common/autotest_common.sh@10 -- # set +x
00:21:16.073 ************************************
00:21:16.073 END TEST nvmf_bdevperf
00:21:16.073 ************************************
00:21:16.073 16:17:17 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:21:16.073 16:17:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:21:16.073 16:17:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:16.073 16:17:17 -- common/autotest_common.sh@10 -- # set +x
00:21:16.073 ************************************
00:21:16.073 START TEST nvmf_target_disconnect
00:21:16.073 ************************************
00:21:16.073 16:17:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:21:16.073 * Looking for test storage...
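As a sanity check on the bdevperf table above: with the 4096-byte I/O size shown in the job line, the MiB/s column is simply IOPS x 4096 / 2^20, which reproduces the reported 25.85, and the large Fail/s figure is consistent with the repeated controller resets logged during the run. For example:

  # Recompute the MiB/s column from IOPS at 4 KiB per I/O; prints 25.85
  awk 'BEGIN { printf "%.2f\n", 6616.59 * 4096 / 1048576 }'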
00:21:16.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.073 16:17:17 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.073 16:17:17 -- nvmf/common.sh@7 -- # uname -s 00:21:16.073 16:17:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.073 16:17:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.073 16:17:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.073 16:17:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.073 16:17:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.073 16:17:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.073 16:17:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.073 16:17:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.073 16:17:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.073 16:17:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.073 16:17:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:16.073 16:17:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:16.073 16:17:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.073 16:17:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.073 16:17:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.073 16:17:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.073 16:17:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.073 16:17:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.073 16:17:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.073 16:17:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.073 16:17:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.074 16:17:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.074 16:17:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.074 16:17:17 -- paths/export.sh@5 -- # export PATH 00:21:16.074 16:17:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.074 16:17:17 -- nvmf/common.sh@47 -- # : 0 00:21:16.074 16:17:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.074 16:17:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.074 16:17:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.074 16:17:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.074 16:17:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.074 16:17:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.074 16:17:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.074 16:17:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.074 16:17:17 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:16.074 16:17:17 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:21:16.074 16:17:17 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:21:16.074 16:17:17 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:21:16.074 16:17:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:16.074 16:17:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.074 16:17:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:16.074 16:17:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:16.074 16:17:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:16.074 16:17:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.074 16:17:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.074 16:17:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.074 16:17:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:16.074 16:17:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:16.074 16:17:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:16.074 16:17:17 -- common/autotest_common.sh@10 -- # set +x 00:21:17.977 16:17:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:17.977 16:17:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:17.977 16:17:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:17.977 16:17:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:17.977 16:17:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:17.977 16:17:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:17.977 16:17:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:17.977 
16:17:19 -- nvmf/common.sh@295 -- # net_devs=() 00:21:17.977 16:17:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:17.977 16:17:19 -- nvmf/common.sh@296 -- # e810=() 00:21:17.977 16:17:19 -- nvmf/common.sh@296 -- # local -ga e810 00:21:17.977 16:17:19 -- nvmf/common.sh@297 -- # x722=() 00:21:17.977 16:17:19 -- nvmf/common.sh@297 -- # local -ga x722 00:21:17.977 16:17:19 -- nvmf/common.sh@298 -- # mlx=() 00:21:17.977 16:17:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:17.977 16:17:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.977 16:17:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:17.977 16:17:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:17.977 16:17:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:17.977 16:17:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:17.977 16:17:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:17.977 16:17:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:17.977 16:17:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.977 16:17:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:17.977 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:17.977 16:17:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.977 16:17:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.978 16:17:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:17.978 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:17.978 16:17:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:17.978 16:17:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.978 16:17:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.978 16:17:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.978 16:17:19 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.978 16:17:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:17.978 Found net devices under 0000:09:00.0: cvl_0_0 00:21:17.978 16:17:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.978 16:17:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.978 16:17:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.978 16:17:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.978 16:17:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.978 16:17:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:17.978 Found net devices under 0000:09:00.1: cvl_0_1 00:21:17.978 16:17:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.978 16:17:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:17.978 16:17:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:17.978 16:17:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:17.978 16:17:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:17.978 16:17:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.978 16:17:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.978 16:17:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.978 16:17:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:17.978 16:17:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.978 16:17:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.978 16:17:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:17.978 16:17:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.978 16:17:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.978 16:17:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:17.978 16:17:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:17.978 16:17:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.978 16:17:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.236 16:17:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.237 16:17:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.237 16:17:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:18.237 16:17:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.237 16:17:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.237 16:17:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.237 16:17:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:18.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:21:18.237 00:21:18.237 --- 10.0.0.2 ping statistics --- 00:21:18.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.237 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:18.237 16:17:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms
00:21:18.237
00:21:18.237 --- 10.0.0.1 ping statistics ---
00:21:18.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:18.237 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms
00:21:18.237 16:17:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:18.237 16:17:19 -- nvmf/common.sh@411 -- # return 0
00:21:18.237 16:17:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:21:18.237 16:17:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:18.237 16:17:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:21:18.237 16:17:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:21:18.237 16:17:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:18.237 16:17:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:21:18.237 16:17:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:21:18.237 16:17:19 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:21:18.237 16:17:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:18.237 16:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:18.237 16:17:19 -- common/autotest_common.sh@10 -- # set +x
00:21:18.237 ************************************
00:21:18.237 START TEST nvmf_target_disconnect_tc1
00:21:18.237 ************************************
00:21:18.237 16:17:19 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1
00:21:18.237 16:17:19 -- host/target_disconnect.sh@32 -- # set +e
00:21:18.237 16:17:19 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:18.496 EAL: No free 2048 kB hugepages reported on node 1
00:21:18.496 [2024-04-24 16:17:19.576426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.496 [2024-04-24 16:17:19.576752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.496 [2024-04-24 16:17:19.576805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ad0 with addr=10.0.0.2, port=4420
00:21:18.496 [2024-04-24 16:17:19.576843] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:18.496 [2024-04-24 16:17:19.576871] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:18.496 [2024-04-24 16:17:19.576885] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:21:18.496 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:21:18.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:21:18.496 Initializing NVMe Controllers
00:21:18.496 16:17:19 -- host/target_disconnect.sh@33 -- # trap - ERR
00:21:18.496 16:17:19 -- host/target_disconnect.sh@33 -- # print_backtrace
00:21:18.496 16:17:19 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]]
00:21:18.496 16:17:19 -- common/autotest_common.sh@1139 -- # return 0
00:21:18.496 16:17:19 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']'
00:21:18.496 16:17:19 -- host/target_disconnect.sh@41 -- # set -e
00:21:18.496
00:21:18.496 real 0m0.096s
00:21:18.496 user 0m0.047s
00:21:18.496 sys 0m0.048s
00:21:18.496 16:17:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:18.496 16:17:19 -- common/autotest_common.sh@10 -- # set +x
00:21:18.496 ************************************
END TEST nvmf_target_disconnect_tc1
00:21:18.496 ************************************
00:21:18.496 16:17:19 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:21:18.496 16:17:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:18.496 16:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:18.496 16:17:19 -- common/autotest_common.sh@10 -- # set +x
00:21:18.496 ************************************
00:21:18.496 START TEST nvmf_target_disconnect_tc2
00:21:18.496 ************************************
00:21:18.496 16:17:19 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2
00:21:18.496 16:17:19 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2
00:21:18.496 16:17:19 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:21:18.496 16:17:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:18.496 16:17:19 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:18.496 16:17:19 -- common/autotest_common.sh@10 -- # set +x
00:21:18.496 16:17:19 -- nvmf/common.sh@470 -- # nvmfpid=3479097
00:21:18.496 16:17:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:21:18.496 16:17:19 -- nvmf/common.sh@471 -- # waitforlisten 3479097
00:21:18.496 16:17:19 -- common/autotest_common.sh@817 -- # '[' -z 3479097 ']'
00:21:18.496 16:17:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:18.496 16:17:19 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:18.496 16:17:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:18.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:18.496 16:17:19 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:18.496 16:17:19 -- common/autotest_common.sh@10 -- # set +x
00:21:18.497 [2024-04-24 16:17:19.748192] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:21:18.497 [2024-04-24 16:17:19.748266] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:18.756 EAL: No free 2048 kB hugepages reported on node 1
00:21:18.756 [2024-04-24 16:17:19.813905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:18.756 [2024-04-24 16:17:19.918807] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:18.756 [2024-04-24 16:17:19.918864] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:18.756 [2024-04-24 16:17:19.918877] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:18.756 [2024-04-24 16:17:19.918888] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:18.756 [2024-04-24 16:17:19.918899] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
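The nvmfappstart/waitforlisten pair traced above reduces to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace created earlier and polling its RPC socket until the app answers. A rough standalone equivalent, assuming the same namespace and flags as this run; the polling loop is illustrative rather than the autotest implementation, though it mirrors the max_retries=100 seen above:

  # Start the target in the netns with this run's flags
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Poll the default RPC socket until the target is ready (or the process dies)
  for ((i = 0; i < 100; i++)); do
      rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; break; }
      sleep 0.5
  done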
00:21:18.756 [2024-04-24 16:17:19.918996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:18.756 [2024-04-24 16:17:19.919060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:18.756 [2024-04-24 16:17:19.919126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:18.756 [2024-04-24 16:17:19.919129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:19.015 16:17:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:19.015 16:17:20 -- common/autotest_common.sh@850 -- # return 0 00:21:19.015 16:17:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:19.015 16:17:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:19.015 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:21:19.015 16:17:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.015 16:17:20 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:19.015 16:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.015 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:21:19.015 Malloc0 00:21:19.015 16:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.015 16:17:20 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:19.015 16:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.015 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:21:19.015 [2024-04-24 16:17:20.090396] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.015 16:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.015 16:17:20 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:19.015 16:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.015 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:21:19.015 16:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.015 16:17:20 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:19.015 16:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.015 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:21:19.015 16:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.015 16:17:20 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.015 16:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.015 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:21:19.015 [2024-04-24 16:17:20.118650] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.015 16:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.015 16:17:20 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:19.015 16:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.015 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:21:19.015 16:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.015 16:17:20 -- host/target_disconnect.sh@50 -- # reconnectpid=3479220 00:21:19.015 16:17:20 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.015 16:17:20 -- 
host/target_disconnect.sh@52 -- # sleep 2 00:21:19.015 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.920 16:17:22 -- host/target_disconnect.sh@53 -- # kill -9 3479097 00:21:20.920 16:17:22 -- host/target_disconnect.sh@55 -- # sleep 2 00:21:20.920 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 [2024-04-24 16:17:22.142925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:20.921 Read completed 
with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 [2024-04-24 16:17:22.143288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error 
(sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 [2024-04-24 16:17:22.143618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Write completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.921 Read completed with error (sct=0, sc=8) 00:21:20.921 starting I/O failed 00:21:20.922 Read completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Write completed with error (sct=0, 
sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Read completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Read completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Write completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Write completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Write completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Write completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Write completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Write completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Read completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Read completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Write completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Write completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 Read completed with error (sct=0, sc=8) 00:21:20.922 starting I/O failed 00:21:20.922 [2024-04-24 16:17:22.143951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:20.922 [2024-04-24 16:17:22.144204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.144372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.144416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.144660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.144890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.144918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.145067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.145265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.145291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.145460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.145670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.145699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 
00:21:20.922 [2024-04-24 16:17:22.145898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.146030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.146059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.146269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.146447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.146490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.146700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.146865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.146892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.147032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.147177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.147203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.147378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.147543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.147570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.147729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.147907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.147933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.148109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.148328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.148375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 
00:21:20.922 [2024-04-24 16:17:22.148556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.148749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.148777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.148919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.149069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.149096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.149270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.149421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.149448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.149631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.149791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.149820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.149992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.150145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.150188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.150437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.150641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.150672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.150859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.151004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.151041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 
00:21:20.922 [2024-04-24 16:17:22.151212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.151395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.151422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.151559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.151725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.151762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.151953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.152201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.152231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.152415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.152636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.152666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.152865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.152995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.153041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.922 [2024-04-24 16:17:22.153240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.153401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.922 [2024-04-24 16:17:22.153428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.922 qpair failed and we were unable to recover it. 00:21:20.923 [2024-04-24 16:17:22.153587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.923 [2024-04-24 16:17:22.153754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.923 [2024-04-24 16:17:22.153793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:20.923 qpair failed and we were unable to recover it. 
00:21:20.923 [2024-04-24 16:17:22.153938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.923 [2024-04-24 16:17:22.154097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.923 [2024-04-24 16:17:22.154124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420
00:21:20.923 qpair failed and we were unable to recover it.
[... the four-line pattern above repeats twice more for tqpair=0x7faaf4000b90 (timestamps 16:17:22.154257 through 16:17:22.154807), then the qpair address changes ...]
00:21:20.923 [2024-04-24 16:17:22.155022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.923 [2024-04-24 16:17:22.155237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.923 [2024-04-24 16:17:22.155270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:20.923 qpair failed and we were unable to recover it.
[... the same four-line pattern repeats another ~149 times for tqpair=0xdadf30 (timestamps 16:17:22.155454 through 16:17:22.211304); every attempt targets addr=10.0.0.2, port=4420, every connect() fails with errno = 111, and every qpair is reported as failed and unrecoverable ...]
00:21:21.200 [2024-04-24 16:17:22.211514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.200 [2024-04-24 16:17:22.211669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.200 [2024-04-24 16:17:22.211695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.200 qpair failed and we were unable to recover it.
00:21:21.200 [2024-04-24 16:17:22.211874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.212013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.212039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.212172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.212302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.212328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.212454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.212611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.212638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.212803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.212957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.212983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.213142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.213320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.213350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.213550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.213724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.213762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.213944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.214088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.214117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 
00:21:21.200 [2024-04-24 16:17:22.214247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.214418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.214444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.214641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.214830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.214857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.214984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.215143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.215169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.215348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.215489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.215515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.215706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.215853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.215883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.216084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.216246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.216277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.216434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.216561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.216587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 
00:21:21.200 [2024-04-24 16:17:22.216748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.216910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.216954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.200 qpair failed and we were unable to recover it. 00:21:21.200 [2024-04-24 16:17:22.217140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.217276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.200 [2024-04-24 16:17:22.217318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.217462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.217614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.217643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.217834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.217986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.218015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.218190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.218361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.218390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.218561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.218760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.218789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.218946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.219114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.219143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 
00:21:21.201 [2024-04-24 16:17:22.219318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.219470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.219497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.219677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.219879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.219909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.220079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.220234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.220260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.220480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.220631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.220657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.220832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.221008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.221039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.221225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.221387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.221428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.221632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.221845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.221873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 
00:21:21.201 [2024-04-24 16:17:22.222057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.222235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.222263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.222433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.222614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.222640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.222823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.223019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.223049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.223252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.223440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.223469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.223677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.223870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.223897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.224062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.224265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.224294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.224476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.224660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.224704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 
00:21:21.201 [2024-04-24 16:17:22.224878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.225061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.225104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.225290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.225441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.225485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.225659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.225832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.225862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.226031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.226332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.226387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.226566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.226695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.226721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.201 qpair failed and we were unable to recover it. 00:21:21.201 [2024-04-24 16:17:22.226865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.201 [2024-04-24 16:17:22.227018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.227044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.227237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.227425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.227451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 
00:21:21.202 [2024-04-24 16:17:22.227609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.227791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.227818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.228019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.228206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.228233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.228414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.228597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.228626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.228825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.228960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.229002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.229178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.229355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.229381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.229539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.229763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.229793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.229977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.230253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.230311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 
00:21:21.202 [2024-04-24 16:17:22.230519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.230654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.230696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.230885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.231040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.231067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.231285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.231469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.231495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.231633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.231827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.231854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.232006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.232191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.232217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.232396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.232554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.232580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.232764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.232933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.232960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 
00:21:21.202 [2024-04-24 16:17:22.233130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.233413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.233442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.233613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.233793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.233821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.233981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.234111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.234138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.234263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.234422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.234448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.234621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.234875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.234929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.202 qpair failed and we were unable to recover it. 00:21:21.202 [2024-04-24 16:17:22.235139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.202 [2024-04-24 16:17:22.235297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.235323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.235458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.235617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.235660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 
00:21:21.203 [2024-04-24 16:17:22.235835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.236011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.236046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.236246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.236518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.236544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.236701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.236869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.236897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.237028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.237187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.237214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.237385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.237537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.237563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.237721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.237920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.237951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.238139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.238296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.238338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 
00:21:21.203 [2024-04-24 16:17:22.238549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.238728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.238765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.238955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.239184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.239210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.239341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.239496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.239522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.239683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.239843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.239887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.240068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.240270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.240340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.240540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.240724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.240756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.240938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.241140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.241166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 
00:21:21.203 [2024-04-24 16:17:22.241323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.241505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.241531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.241667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.241824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.241850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.241989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.242184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.242210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.242368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.242596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.242622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.203 qpair failed and we were unable to recover it. 00:21:21.203 [2024-04-24 16:17:22.242786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.242913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.203 [2024-04-24 16:17:22.242939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.243094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.243227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.243253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.243417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.243610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.243639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 
00:21:21.204 [2024-04-24 16:17:22.243847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.244007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.244049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.244220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.244397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.244423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.244581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.244765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.244792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.244948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.245127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.245154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.245505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.245701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.245730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.245950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.246131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.246158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.246339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.246500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.246527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 
00:21:21.204 [2024-04-24 16:17:22.246678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.246838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.246865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.247003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.247216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.247245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.247415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.247588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.247617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.247817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.247987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.248013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.248218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.248364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.248393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.248584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.248767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.248795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 00:21:21.204 [2024-04-24 16:17:22.248972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.249182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.249239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it. 
00:21:21.204 [2024-04-24 16:17:22.249401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.249556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.204 [2024-04-24 16:17:22.249599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.204 qpair failed and we were unable to recover it.
00:21:21.204-00:21:21.211 [... the identical failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0xdadf30 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 16:17:22.249 through 16:17:22.311, on the order of 150 attempts in total ...]
00:21:21.211 [2024-04-24 16:17:22.311277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.311543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.311601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.311798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.312014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.312041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.312257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.312454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.312481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.312664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.312861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.312888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.313045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.313289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.313316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.313525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.313699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.313729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.313901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.314052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.314081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 
00:21:21.211 [2024-04-24 16:17:22.314307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.314475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.314516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.314675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.314850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.314877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.315030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.315163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.315190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.315373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.315647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.315677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.315844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.316021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.316110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.316374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.316606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.316635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.316809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.316981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.317010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 
00:21:21.211 [2024-04-24 16:17:22.317187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.317366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.317392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.317568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.317779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.317820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.318015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.318239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.318268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.318468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.318663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.318692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.211 [2024-04-24 16:17:22.318902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.319060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.211 [2024-04-24 16:17:22.319087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.211 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.319316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.319537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.319586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.319839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.320003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.320029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 
00:21:21.212 [2024-04-24 16:17:22.320238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.320457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.320511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.320714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.320925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.320954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.321102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.321306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.321335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.321625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.321874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.321905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.322105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.322372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.322429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.322619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.322803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.322830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.322990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.323172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.323214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 
00:21:21.212 [2024-04-24 16:17:22.323552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.323795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.323824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.323998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.324175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.324204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.324405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.324583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.324612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.324826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.325013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.325043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.325216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.325417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.325446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.325596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.325799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.325829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.326009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.326253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.326305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 
00:21:21.212 [2024-04-24 16:17:22.326488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.326650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.326679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.326866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.327022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.327051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.327213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.327372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.327399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.327584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.327785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.327815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.328002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.328133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.328160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.328347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.328671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.328728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.328992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.329176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.329202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 
00:21:21.212 [2024-04-24 16:17:22.329366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.329543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.329572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.329739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.329933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.329962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.330217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.330391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.330420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.330586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.330753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.330781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.330975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.331112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.331143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.212 [2024-04-24 16:17:22.331280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.331455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.212 [2024-04-24 16:17:22.331484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.212 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.331683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.331882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.331912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 
00:21:21.213 [2024-04-24 16:17:22.332092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.332283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.332310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.332494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.332695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.332724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.332898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.333070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.333098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.333308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.333490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.333516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.333692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.333854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.333896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.334082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.334247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.334276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.334447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.334651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.334680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 
00:21:21.213 [2024-04-24 16:17:22.334861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.335026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.335053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.335233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.335478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.335532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.335712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.335849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.335878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.336043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.336207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.336236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.336454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.336651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.336680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.336853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.336998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.337028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.337207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.337369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.337395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 
00:21:21.213 [2024-04-24 16:17:22.337546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.337701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.337751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.337964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.338106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.338137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.338339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.338482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.338511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.338713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.338927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.338956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.339114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.339293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.339322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.339499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.339650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.339677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.339888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.340225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.340281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 
00:21:21.213 [2024-04-24 16:17:22.340484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.340665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.340694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.340867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.341029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.341072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.341259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.341415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.341442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.341615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.341817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.341845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.342003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.342130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.342157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.342342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.342511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.342541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.213 qpair failed and we were unable to recover it. 00:21:21.213 [2024-04-24 16:17:22.342705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.342918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.213 [2024-04-24 16:17:22.342945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 
00:21:21.214 [2024-04-24 16:17:22.343147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.343383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.343447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.343671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.343860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.343887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.344023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.344174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.344199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.344378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.344575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.344604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.344786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.344987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.345017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.345221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.345394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.345423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.345635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.345771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.345799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 
00:21:21.214 [2024-04-24 16:17:22.345988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.346247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.346302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.346500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.346678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.346707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.346892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.347026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.347052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.347234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.347400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.347443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.347618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.347844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.347899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.348057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.348213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.348241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.348428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.348609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.348635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 
00:21:21.214 [2024-04-24 16:17:22.348838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.348991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.349020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.349205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.349386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.349413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.349619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.349818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.349848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.350016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.350234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.350288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.350460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.350636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.350666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.350855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.351005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.351031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.351211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.351393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.351424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 
00:21:21.214 [2024-04-24 16:17:22.351608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.351782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.351812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.351988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.352139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.352168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.352489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.352697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.352722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.352916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.353068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.353096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.353271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.353481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.353510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.353711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.353876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.353906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 00:21:21.214 [2024-04-24 16:17:22.354071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.354245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.354271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.214 qpair failed and we were unable to recover it. 
00:21:21.214 [2024-04-24 16:17:22.354484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.214 [2024-04-24 16:17:22.354689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.354718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.354912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.355139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.355168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.355343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.355627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.355684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.355919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.356148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.356203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.356375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.356547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.356576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.356782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.356933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.356962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.357168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.357416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.357484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 
00:21:21.215 [2024-04-24 16:17:22.357661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.357875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.357905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.358080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.358255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.358285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.358433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.358600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.358626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.358800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.358964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.358993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.359168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.359386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.359415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.359572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.359772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.359801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.359987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.360336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.360390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 
00:21:21.215 [2024-04-24 16:17:22.360566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.360766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.360796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.361106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.361403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.361428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.361648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.361849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.361879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.362070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.362328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.362388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.362655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.362818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.362845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.363031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.363209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.363238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.363415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.363625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.363665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 
00:21:21.215 [2024-04-24 16:17:22.363870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.364071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.364097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.364369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.364597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.364650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.364806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.365010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.365039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.215 qpair failed and we were unable to recover it. 00:21:21.215 [2024-04-24 16:17:22.365186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.215 [2024-04-24 16:17:22.365348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.365390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.365543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.365724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.365765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.365943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.366122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.366163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.366416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.366588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.366617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 
00:21:21.216 [2024-04-24 16:17:22.366789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.366948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.366975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.367132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.367420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.367472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.367650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.367848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.367874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.368128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.368296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.368337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.368535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.368728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.368762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.369007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.369267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.369297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.369483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.369643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.369671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 
00:21:21.216 [2024-04-24 16:17:22.369833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.369970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.370011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.370236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.370409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.370438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.370616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.371025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.371087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.371265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.371419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.371448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.371619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.371820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.371850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.372020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.372172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.372202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.372455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.372652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.372693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 
00:21:21.216 [2024-04-24 16:17:22.372876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.373085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.373114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.373254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.373417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.373443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.373610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.373769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.373799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.374005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.374253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.374316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.374520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.374681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.374723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.374945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.375248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.375305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.375477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.375659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.375689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 
00:21:21.216 [2024-04-24 16:17:22.375867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.376013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.376040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.376240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.376442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.376469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.376727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.376919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.376947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.377094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.377277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.377306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.216 qpair failed and we were unable to recover it. 00:21:21.216 [2024-04-24 16:17:22.377478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.216 [2024-04-24 16:17:22.377637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.377679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.377885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.378068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.378095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.378330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.378505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.378534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 
00:21:21.217 [2024-04-24 16:17:22.378703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.378889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.378919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.379096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.379358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.379398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.379545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.379756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.379786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.380029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.380226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.380255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.380442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.380648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.380674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.380843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.381005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.381031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.381189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.381395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.381435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 
00:21:21.217 [2024-04-24 16:17:22.381644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.381848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.381879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.382047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.382210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.382239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.382431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.382681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.382710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.382970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.383171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.383201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.383396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.383522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.383548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.383717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.383906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.383936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.384137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.384316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.384345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 
00:21:21.217 [2024-04-24 16:17:22.384522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.384693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.384722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.384881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.385086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.385112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.385301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.385585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.385641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.385816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.385996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.386025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.386200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.386403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.386429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.386646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.386849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.386876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.387057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.387206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.387236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 
00:21:21.217 [2024-04-24 16:17:22.387452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.387629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.387658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.387830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.388032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.388061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.388223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.388442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.388468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.388693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.388835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.388878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.389054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.389402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.389457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.217 qpair failed and we were unable to recover it. 00:21:21.217 [2024-04-24 16:17:22.389634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.217 [2024-04-24 16:17:22.389808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.389838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.390055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.390341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.390399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 
00:21:21.218 [2024-04-24 16:17:22.390599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.390804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.390870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.391049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.391294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.391348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.391545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.391696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.391723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.391881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.392064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.392090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.392294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.392465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.392494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.392704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.392870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.392897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.393033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.393191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.393216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 
00:21:21.218 [2024-04-24 16:17:22.393350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.393508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.393536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.393692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.393884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.393914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.394096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.394255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.394308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.394481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.394622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.394651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.394800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.394952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.394981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.395126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.395323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.395352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.395548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.395680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.395706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 
00:21:21.218 [2024-04-24 16:17:22.395885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.396160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.396209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.396463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.396672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.396701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.396857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.397026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.397054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.397211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.397369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.397395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.397556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.397737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.397773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.397952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.398100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.398129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.398326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.398502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.398531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 
00:21:21.218 [2024-04-24 16:17:22.398686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.398826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.398853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.399013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.399164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.399193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.399373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.399519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.399548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.399754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.399924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.399953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.400110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.400314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.400343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.400526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.400684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.218 [2024-04-24 16:17:22.400711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.218 qpair failed and we were unable to recover it. 00:21:21.218 [2024-04-24 16:17:22.400855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.401037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.401065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 
00:21:21.219 [2024-04-24 16:17:22.401243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.401423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.401449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.401587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.401719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.401762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.401924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.402085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.402111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.402271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.402436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.402463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.402588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.402754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.402780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.402952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.403130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.403159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.403333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.403492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.403519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 
00:21:21.219 [2024-04-24 16:17:22.403702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.403885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.403912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.404098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.404257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.404283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.404480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.404640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.404666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.404819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.404957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.404983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.405165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.405394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.405451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.405638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.405800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.405827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 00:21:21.219 [2024-04-24 16:17:22.405978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.406150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.219 [2024-04-24 16:17:22.406219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.219 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0xdadf30 with addr=10.0.0.2, port=4420, timestamps 16:17:22.406408 through 16:17:22.434797 ...]
00:21:21.222 [2024-04-24 16:17:22.435030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.435203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.435230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.222 qpair failed and we were unable to recover it.
00:21:21.222 [2024-04-24 16:17:22.435419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.435592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.435621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.222 qpair failed and we were unable to recover it.
00:21:21.222 [2024-04-24 16:17:22.435789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.435930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.435974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.222 qpair failed and we were unable to recover it.
00:21:21.222 [2024-04-24 16:17:22.436127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.436273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.436301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.222 qpair failed and we were unable to recover it.
00:21:21.222 [2024-04-24 16:17:22.436412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbb860 is same with the state(5) to be set
00:21:21.222 [2024-04-24 16:17:22.436637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.436842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.436876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420
00:21:21.222 qpair failed and we were unable to recover it.
00:21:21.222 [2024-04-24 16:17:22.437030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.437167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.437194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420
00:21:21.222 qpair failed and we were unable to recover it.
00:21:21.222 [2024-04-24 16:17:22.437354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.437512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.222 [2024-04-24 16:17:22.437544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420
00:21:21.222 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420, timestamps 16:17:22.437725 through 16:17:22.457934 ...]
00:21:21.224 [2024-04-24 16:17:22.458059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.224 [2024-04-24 16:17:22.458220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.224 [2024-04-24 16:17:22.458247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:21:21.224 qpair failed and we were unable to recover it. 00:21:21.224 [2024-04-24 16:17:22.458395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.224 [2024-04-24 16:17:22.458549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.224 [2024-04-24 16:17:22.458577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.224 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.458756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.458914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.458940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.459093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.459246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.459272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.459452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.459607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.459636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.459812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.459945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.459971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.460182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.460342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.460368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 
00:21:21.225 [2024-04-24 16:17:22.460526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.460749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.460775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.460935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.461069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.461112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.461297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.461480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.461523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.461697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.461841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.461870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.462020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.462162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.462189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.462392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.462568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.462597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.462779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.462958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.462987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 
00:21:21.225 [2024-04-24 16:17:22.463190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.463355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.463381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.463555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.463732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.463767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.463944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.464101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.464127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.464265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.464421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.464448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.464708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.464904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.464934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.465106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.465287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.465316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.465487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.465676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.465703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 
00:21:21.225 [2024-04-24 16:17:22.465877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.466055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.466089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.466269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.466476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.466505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.466679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.466821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.466849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.466986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.467152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.467179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.467416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.467573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.467607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.467791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.467925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.467951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.468135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.468308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.468337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 
00:21:21.225 [2024-04-24 16:17:22.468510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.468682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.468708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.225 qpair failed and we were unable to recover it. 00:21:21.225 [2024-04-24 16:17:22.468887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.225 [2024-04-24 16:17:22.469027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.469054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.226 qpair failed and we were unable to recover it. 00:21:21.226 [2024-04-24 16:17:22.469238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.469376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.469402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.226 qpair failed and we were unable to recover it. 00:21:21.226 [2024-04-24 16:17:22.469588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.469764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.469794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.226 qpair failed and we were unable to recover it. 00:21:21.226 [2024-04-24 16:17:22.469963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.470092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.470119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.226 qpair failed and we were unable to recover it. 00:21:21.226 [2024-04-24 16:17:22.470261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.470395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.470421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.226 qpair failed and we were unable to recover it. 00:21:21.226 [2024-04-24 16:17:22.470617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.470763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.470793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.226 qpair failed and we were unable to recover it. 
00:21:21.226 [2024-04-24 16:17:22.470970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.471148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.471175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.226 qpair failed and we were unable to recover it. 00:21:21.226 [2024-04-24 16:17:22.471318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.471486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.226 [2024-04-24 16:17:22.471512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.226 qpair failed and we were unable to recover it. 00:21:21.502 [2024-04-24 16:17:22.471719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.471869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.471898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.502 qpair failed and we were unable to recover it. 00:21:21.502 [2024-04-24 16:17:22.472051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.472191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.472215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.502 qpair failed and we were unable to recover it. 00:21:21.502 [2024-04-24 16:17:22.472410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.472551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.472578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.502 qpair failed and we were unable to recover it. 00:21:21.502 [2024-04-24 16:17:22.472771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.472948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.472973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.502 qpair failed and we were unable to recover it. 00:21:21.502 [2024-04-24 16:17:22.473144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.473263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.473304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.502 qpair failed and we were unable to recover it. 
00:21:21.502 [2024-04-24 16:17:22.473484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.473626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.473655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.502 qpair failed and we were unable to recover it. 00:21:21.502 [2024-04-24 16:17:22.473844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.473977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.502 [2024-04-24 16:17:22.474003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.502 qpair failed and we were unable to recover it. 00:21:21.502 [2024-04-24 16:17:22.474182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.474354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.474383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.474551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.474727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.474765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.474941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.475140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.475186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.475401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.475587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.475615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.475817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.475974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.476009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 
00:21:21.503 [2024-04-24 16:17:22.476219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.476350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.476377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.476568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.476753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.476782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.476967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.477124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.477169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.477337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.477526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.477555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.477734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.477911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.477936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.478094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.478234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.478261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.478422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.478601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.478630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 
00:21:21.503 [2024-04-24 16:17:22.478783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.478944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.478970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.479101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.479239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.479267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.479450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.479657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.479684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.479864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.480027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.480080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.480278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.480455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.480484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.480627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.480810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.480839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.480997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.481166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.481193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 
00:21:21.503 [2024-04-24 16:17:22.481351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.481484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.481510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.481670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.481864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.481893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.482076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.482253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.482294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.482445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.482620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.482650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.482810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.482946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.482973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.483154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.483320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.483346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.483476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.483607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.483634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 
00:21:21.503 [2024-04-24 16:17:22.483793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.483992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.484021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.484172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.484330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.484357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.484488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.484641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.484688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.484868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.485069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.485097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.485301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.485509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.485538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.485712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.485880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.485912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.486057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.486239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.486265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 
00:21:21.503 [2024-04-24 16:17:22.486447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.486601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.486630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.486788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.486985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.487014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.487189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.487410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.487441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.487645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.487833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.487861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.488063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.488244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.488271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.488451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.488653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.488679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.488832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.489009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.489038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 
00:21:21.503 [2024-04-24 16:17:22.489223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.489428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.489457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.489622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.489760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.489790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.503 [2024-04-24 16:17:22.489971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.490116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.503 [2024-04-24 16:17:22.490142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.503 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.490294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.490486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.490516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.490702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.490905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.490932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.491092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.491212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.491238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.491441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.491639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.491667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 
00:21:21.504 [2024-04-24 16:17:22.491834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.491984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.492018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.492175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.492338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.492365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.492503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.492692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.492735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.492916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.493112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.493142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.493316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.493476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.493503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.493699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.493893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.493920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.494079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.494205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.494231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 
00:21:21.504 [2024-04-24 16:17:22.494387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.494579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.494608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.494804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.494938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.494964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.495116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.495266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.495292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.495417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.495566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.495593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.495754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.495913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.495955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.496125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.496282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.496311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 00:21:21.504 [2024-04-24 16:17:22.496467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.496653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.504 [2024-04-24 16:17:22.496679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.504 qpair failed and we were unable to recover it. 
00:21:21.504 [2024-04-24 16:17:22.496858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.504 [2024-04-24 16:17:22.497018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.504 [2024-04-24 16:17:22.497045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.504 qpair failed and we were unable to recover it.
[... the same four-record sequence (connect() failed, errno = 111 twice; sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats through 2024-04-24 16:17:22.517326 ...]
00:21:21.505 [2024-04-24 16:17:22.517507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.517688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.517716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.505 qpair failed and we were unable to recover it. 00:21:21.505 [2024-04-24 16:17:22.517929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.518104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.518134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.505 qpair failed and we were unable to recover it. 00:21:21.505 [2024-04-24 16:17:22.518275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.518415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.518442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.505 qpair failed and we were unable to recover it. 00:21:21.505 [2024-04-24 16:17:22.518609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.518793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.518833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.505 qpair failed and we were unable to recover it. 00:21:21.505 [2024-04-24 16:17:22.519022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.519217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.519276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.505 qpair failed and we were unable to recover it. 00:21:21.505 [2024-04-24 16:17:22.519449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.519630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.519660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.505 qpair failed and we were unable to recover it. 00:21:21.505 [2024-04-24 16:17:22.519846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.520003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.520030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.505 qpair failed and we were unable to recover it. 
00:21:21.505 [2024-04-24 16:17:22.520173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.520308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.520335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.505 qpair failed and we were unable to recover it. 00:21:21.505 [2024-04-24 16:17:22.520506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.520677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.520704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.505 qpair failed and we were unable to recover it. 00:21:21.505 [2024-04-24 16:17:22.520902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.505 [2024-04-24 16:17:22.521106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.521154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.521365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.521528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.521555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.521693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.521855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.521882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.522016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.522203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.522230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.522388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.522566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.522602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 
00:21:21.506 [2024-04-24 16:17:22.522772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.522939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.522966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.523150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.523284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.523312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.523483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.523666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.523694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.523838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.524018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.524045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.524204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.524382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.524409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.524574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.524806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.524834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.524999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.525147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.525178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 
00:21:21.506 [2024-04-24 16:17:22.525348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.525521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.525548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.525703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.525908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.525936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.526103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.526239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.526273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.526430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.526635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.526664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.526874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.527011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.527038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.527173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.527376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.527406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.527557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.527746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.527774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 
00:21:21.506 [2024-04-24 16:17:22.527957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.528115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.528143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.528324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.528490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.528521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.528809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.528966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.528993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.529153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.529318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.529352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.529558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.529731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.529785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.529936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.530122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.530150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.530297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.530477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.530521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 
00:21:21.506 [2024-04-24 16:17:22.530665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.530828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.530856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.531020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.531149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.531175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.531340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.531553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.531583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.531736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.531903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.531929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.532056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.532210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.532239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.532385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.532574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.532604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.532805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.532979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.533006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 
00:21:21.506 [2024-04-24 16:17:22.533145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.533305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.533333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.533530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.533694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.533720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.533931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.534072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.534101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.534260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.534427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.534457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.534658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.534843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.534871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.535011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.535166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.535192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.535355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.535524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.535558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 
00:21:21.506 [2024-04-24 16:17:22.535750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.535914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.535941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.536121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.536300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.536328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.536475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.536677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.536707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.536892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.537049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.537080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.537270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.537430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.537458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.506 [2024-04-24 16:17:22.537614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.537750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.506 [2024-04-24 16:17:22.537778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.506 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.537961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.538117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.538145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 
00:21:21.507 [2024-04-24 16:17:22.538297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.538434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.538461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.538646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.538850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.538881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.539046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.539257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.539283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.539428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.539624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.539655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.539870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.540027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.540056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.540203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.540409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.540439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.540620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.540781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.540809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 
00:21:21.507 [2024-04-24 16:17:22.540971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.541135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.541161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.541325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.541455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.541482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.541625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.541789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.541816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.542001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.542158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.542185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.542370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.542527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.542553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.542709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.542902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.542930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.543116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.543301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.543327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 
00:21:21.507 [2024-04-24 16:17:22.543505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.543685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.543712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.543873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.544012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.544039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.544226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.545073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.545108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.545321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.545497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.545528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.545672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.545846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.545877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.546057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.546258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.546289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.546462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.546648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.546687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 
00:21:21.507 [2024-04-24 16:17:22.546856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.547005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.547044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.547159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.547289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.547317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.547481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.547665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.547708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.547899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.548064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.548094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.548226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.548413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.548439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.548647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.548837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.548865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 00:21:21.507 [2024-04-24 16:17:22.549033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.549239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.507 [2024-04-24 16:17:22.549268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.507 qpair failed and we were unable to recover it. 
00:21:21.507 [2024-04-24 16:17:22.549457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.549621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.549651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.549831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.550033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.550062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.550211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.550380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.550409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.550610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.550793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.550820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.551008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.551166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.551210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.551369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.551508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.551539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.551750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.551893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.551919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.552072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.552210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.552237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.552424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.552595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.552625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.552832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.553037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.553066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.553246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.553402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.553428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.553639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.553815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.553846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.507 qpair failed and we were unable to recover it.
00:21:21.507 [2024-04-24 16:17:22.554050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.554174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.507 [2024-04-24 16:17:22.554202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.554411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.554617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.554645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.554822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.555019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.555048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.555219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.555348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.555375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.555583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.555769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.555801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.556000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.556166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.556196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.556404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.556562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.556588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.556765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.556937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.556967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.557124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.557282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.557309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.557491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.557630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.557657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.557847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.558023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.558054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.558252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.558454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.558483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.558685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.558841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.558871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.559042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.559237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.559280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.559461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.559659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.559688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.559896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.560058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.560083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.560222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.560379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.560404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.560553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.560753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.560783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.560961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.561181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.561210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.561382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.561586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.561615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.561833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.561987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.562028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.562236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.562395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.562438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.562606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.562777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.562807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.562975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.563115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.563146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.563300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.563468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.563493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.563623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.563807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.563852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.564031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.564167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.564195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.564374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.564512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.564555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.564756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.564958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.564986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.565130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.565315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.565341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.565500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.565660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.565686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.565842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.565970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.565997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.566199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.566385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.566417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.566615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.566815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.566845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.567043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.567203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.567249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.567419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.567593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.567623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.567808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.567963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.567989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.568167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.568366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.568392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.568595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.568768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.568799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.568971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.569146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.569217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.569397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.569611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.569636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.569812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.570024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.570050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.570205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.570411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.570481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.570704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.570905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.570931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.571128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.571295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.571341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.571546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.571729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.571767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.571978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.572174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.572201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.572401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.572619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.572645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.508 qpair failed and we were unable to recover it.
00:21:21.508 [2024-04-24 16:17:22.572836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.508 [2024-04-24 16:17:22.572986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.573021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.573220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.573399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.573426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.573593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.573782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.573812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.573973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.574134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.574160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.574383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.574593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.574619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.574793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.574968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.574996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.575205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.575325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.575369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.575544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.575752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.575779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.575958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.576136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.576165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.576316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.576474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.576516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.576685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.576850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.576885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.577069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.577253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.577300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.577504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.577708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.577736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.577930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.578095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.578123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.578294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.578463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.578493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.578689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.578841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.578867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.579071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.579219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.579247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.579413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.579569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.579597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.579804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.579987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.580015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.580168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.580316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.580344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.580546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.580752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.580786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.580940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.581082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.581107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.581265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.581466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.581494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.581691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.581853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.581880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.582043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.582267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.582326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.582532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.582691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.582721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.582891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.583092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.583118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.583286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.583450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.583476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.583625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.583826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.583856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.584063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.584286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.584333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.584538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.584737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.584779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.585008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.585130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.585156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.585338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.585497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.585522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.585657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.585791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.585818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.586042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.586172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.586199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.586358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.586518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.586545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.586725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.586916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.586943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.587075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.587235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.587260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.587437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.587618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.587646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.587848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.588024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.588052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.588248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.588432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.588458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.588640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.588815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.588845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.589032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.589155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.589198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.589375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.589558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.589584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.589735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.589935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.589962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.590112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.590262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.590290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.590443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.590628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.590657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.590827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.591037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.591064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.591246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.591392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.591421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.591612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.591816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.591846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.592046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.592243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.592271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.592436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.592570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.592595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.592784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.592982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.593011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.593188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.593342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.593374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.593571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.593772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.593812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.509 qpair failed and we were unable to recover it.
00:21:21.509 [2024-04-24 16:17:22.594014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.594136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.509 [2024-04-24 16:17:22.594164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.594375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.594515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.594544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.594725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.594920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.594946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.595127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.595378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.595432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.595619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.595786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.595829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.596034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.596185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.596215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.596432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.596603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.596633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.596828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.596972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.597009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.597225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.597431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.597483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.597691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.597894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.597923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.598106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.598312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.510 [2024-04-24 16:17:22.598342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.510 qpair failed and we were unable to recover it.
00:21:21.510 [2024-04-24 16:17:22.598524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.598691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.598717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.598893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.599030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.599057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.599270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.599420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.599449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.599624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.599788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.599815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.599949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.600143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.600170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.600343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.600531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.600557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.600746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.600926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.600955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 
00:21:21.510 [2024-04-24 16:17:22.601135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.601307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.601335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.601489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.601683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.601712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.601902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.602109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.602155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.602325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.602532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.602562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.602724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.602944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.602974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.603149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.603323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.603353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.603523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.603697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.603724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 
00:21:21.510 [2024-04-24 16:17:22.603873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.604023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.604051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.604258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.604446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.604473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.604652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.604827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.604857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.605010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.605271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.605328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.605525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.605665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.605693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.605853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.606049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.606078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.606277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.606418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.606446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 
00:21:21.510 [2024-04-24 16:17:22.606623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.606808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.606836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.607035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.607183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.607213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.607408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.607592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.607619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.607758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.607903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.607930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.608160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.608321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.608364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.608560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.608760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.608793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.608952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.609162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.609214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 
00:21:21.510 [2024-04-24 16:17:22.609387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.609567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.609594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.609800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.609992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.610019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.610217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.610402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.610432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.610632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.610791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.610821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.611023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.611243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.611290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.611495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.611682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.611711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.611924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.612064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.612090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 
00:21:21.510 [2024-04-24 16:17:22.612252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.612409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.612436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.612619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.612805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.612832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.613004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.613172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.613203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.613382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.613593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.613619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.613808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.613937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.613964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.614090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.614276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.614306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.510 [2024-04-24 16:17:22.614507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.614714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.614747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 
00:21:21.510 [2024-04-24 16:17:22.614925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.615056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.510 [2024-04-24 16:17:22.615101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.510 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.615280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.615465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.615491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.615642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.615784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.615820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.616007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.616216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.616245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.616428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.616607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.616646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.616828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.617025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.617071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.617229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.617391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.617418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 
00:21:21.511 [2024-04-24 16:17:22.617627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.617952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.618015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.618196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.618391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.618418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.618600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.618771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.618809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.618980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.619189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.619219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.619422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.619581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.619610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.619797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.619975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.620010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.620215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.620418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.620448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 
00:21:21.511 [2024-04-24 16:17:22.620624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.620825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.620852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.621010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.621136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.621163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.621326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.621504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.621533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.621726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.621911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.621940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.622124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.622300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.622329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.622502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.622700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.622729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.622898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.623061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.623087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 
00:21:21.511 [2024-04-24 16:17:22.623269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.623403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.623429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.623632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.623801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.623832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.624043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.624199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.624227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.624348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.624547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.624577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.624773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.624917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.624944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.625093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.625220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.625248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.625430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.625599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.625628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 
00:21:21.511 [2024-04-24 16:17:22.625824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.626003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.626029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.626237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.626438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.626464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.626622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.626772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.626798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.627001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.627151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.627180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.627356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.627545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.627573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.627783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.627955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.627981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.628167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.628352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.628384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 
00:21:21.511 [2024-04-24 16:17:22.628609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.628796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.628823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.629006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.629162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.629203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.629379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.629562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.629606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.629822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.629985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.630016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.630173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.630331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.630358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.630545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.630720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.630756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.630959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.631144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.631170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 
00:21:21.511 [2024-04-24 16:17:22.631321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.631471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.631514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.631662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.631877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.631904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.511 [2024-04-24 16:17:22.632081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.632305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.511 [2024-04-24 16:17:22.632356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.511 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.632534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.632706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.632736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.632936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.633142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.633168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.633323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.633486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.633529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.633709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.633908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.633938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 
00:21:21.512 [2024-04-24 16:17:22.634145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.634328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.634355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.634519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.634642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.634667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.634828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.634983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.635035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.635212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.635409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.635455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.635631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.635813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.635854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.636019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.636175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.636201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.636347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.636546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.636574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 
00:21:21.512 [2024-04-24 16:17:22.636773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.636962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.637002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.637176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.637308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.637334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.637537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.637736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.637772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.637928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.638109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.638138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.638342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.638491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.638519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.638685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.638886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.638915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.639120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.639313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.639366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 
00:21:21.512 [2024-04-24 16:17:22.639545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.639759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.639794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.639963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.640201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.640252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.640412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.640580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.640608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.640810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.640980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.641009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.641179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.641324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.641352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.641510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.641705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.641733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 00:21:21.512 [2024-04-24 16:17:22.641940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.642095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.512 [2024-04-24 16:17:22.642120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:21.512 qpair failed and we were unable to recover it. 
00:21:21.512 [2024-04-24 16:17:22.642304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.512 [2024-04-24 16:17:22.642472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.512 [2024-04-24 16:17:22.642502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:21.512 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 16:17:22.642 through 16:17:22.667 ...]
00:21:21.513 [2024-04-24 16:17:22.667932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.513 [2024-04-24 16:17:22.668122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.513 [2024-04-24 16:17:22.668153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.513 qpair failed and we were unable to recover it.
[... the same failure sequence then repeats for tqpair=0xdadf30 with addr=10.0.0.2, port=4420 from 16:17:22.668 through 16:17:22.698 ...]
00:21:21.515 [2024-04-24 16:17:22.699141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.699326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.699355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.699534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.699714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.699750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.699969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.700095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.700121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.700275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.700485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.700545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.700706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.700893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.700920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.701099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.701325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.701352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.701505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.701700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.701730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 
00:21:21.515 [2024-04-24 16:17:22.701950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.702136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.702167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.702341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.702501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.702540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.702731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.702913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.702939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.703092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.703247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.703273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.703480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.703697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.703724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.703906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.704065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.704091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.704253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.704419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.704445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 
00:21:21.515 [2024-04-24 16:17:22.704647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.704827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.704855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.705025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.705211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.705238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.705429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.705609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.705638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.705838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.705998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.706040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.706221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.706404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.706430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.706591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.706800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.706870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.707044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.707213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.707242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 
00:21:21.515 [2024-04-24 16:17:22.707438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.707604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.707633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.707845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.708037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.708067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.708276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.708491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.708542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.708766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.708930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.708971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.709147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.709308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.709334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.709492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.709646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.709676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.709836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.710020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.710061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 
00:21:21.515 [2024-04-24 16:17:22.710209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.710369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.710396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.710554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.710756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.515 [2024-04-24 16:17:22.710796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.515 qpair failed and we were unable to recover it. 00:21:21.515 [2024-04-24 16:17:22.710984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.711147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.711189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.711389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.711566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.711595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.711753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.711955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.711985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.712199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.712376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.712405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.712610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.712734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.712783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 
00:21:21.516 [2024-04-24 16:17:22.712970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.713135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.713164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.713333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.713526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.713554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.713710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.713899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.713942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.714159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.714321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.714365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.714568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.714754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.714793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.714977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.715181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.715210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.715394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.715572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.715614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 
00:21:21.516 [2024-04-24 16:17:22.715822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.715985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.716019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.716199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.716480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.716547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.716722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.716908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.716937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.717125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.717340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.717392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.717559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.717716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.717767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.717922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.718086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.718116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.718286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.718466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.718495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 
00:21:21.516 [2024-04-24 16:17:22.718649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.718817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.718844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.719038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.719233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.719260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.719436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.719653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.719679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.719868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.720032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.720076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.720262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.720425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.720452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.720646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.720835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.720864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.721041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.721202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.721247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 
00:21:21.516 [2024-04-24 16:17:22.721442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.721668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.721697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.721889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.722067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.722096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.722277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.722451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.722480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.722677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.722872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.722898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.723053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.723229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.723255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.723415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.723619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.723648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.723859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.724060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.724092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 
00:21:21.516 [2024-04-24 16:17:22.724288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.724459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.724485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.724644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.724800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.724839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.725021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.725253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.725305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.725481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.725679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.725706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.725876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.726036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.726066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.726273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.726503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.726559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.726772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.726938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.726975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 
00:21:21.516 [2024-04-24 16:17:22.727167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.727320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.727347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.727470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.727629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.727655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.727819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.727991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.728028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.728186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.728326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.728353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.516 [2024-04-24 16:17:22.728561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.728728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.516 [2024-04-24 16:17:22.728764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.516 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.728925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.729149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.729175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.729358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.729569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.729629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 
00:21:21.517 [2024-04-24 16:17:22.729840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.729981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.730006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.730240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.730436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.730464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.730625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.730758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.730796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.731009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.731184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.731213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.731411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.731582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.731611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.731806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.731987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.732024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.732222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.732508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.732568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 
00:21:21.517 [2024-04-24 16:17:22.732798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.732964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.732989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.733153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.733350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.733378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.733577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.733753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.733791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.733991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.734160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.734186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.734366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.734525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.734551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.734683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.734879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.734924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.735100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.735301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.735362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 
00:21:21.517 [2024-04-24 16:17:22.735540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.735723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.735755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.735978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.736163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.736192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.736364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.736590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.736643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.736846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.737011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.737052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.737234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.737436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.737464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.737622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.737786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.737812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 00:21:21.517 [2024-04-24 16:17:22.737947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.738107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.517 [2024-04-24 16:17:22.738134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.517 qpair failed and we were unable to recover it. 
00:21:21.517 [2024-04-24 16:17:22.738320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.517 [2024-04-24 16:17:22.738490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.517 [2024-04-24 16:17:22.738519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.517 qpair failed and we were unable to recover it.
00:21:21.517 [... ~150 further repetitions of the same three-record failure sequence elided: two posix_sock_create "connect() failed, errno = 111" errors, then an nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420" error, each followed by "qpair failed and we were unable to recover it."; timestamps run continuously from 16:17:22.738 through 16:17:22.798 ...]
00:21:21.799 [2024-04-24 16:17:22.798533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.799 [2024-04-24 16:17:22.798717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.799 [2024-04-24 16:17:22.798749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.799 qpair failed and we were unable to recover it.
00:21:21.799 [2024-04-24 16:17:22.798917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.799074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.799100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.799 qpair failed and we were unable to recover it. 00:21:21.799 [2024-04-24 16:17:22.799277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.799505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.799561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.799 qpair failed and we were unable to recover it. 00:21:21.799 [2024-04-24 16:17:22.799729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.799958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.799985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.799 qpair failed and we were unable to recover it. 00:21:21.799 [2024-04-24 16:17:22.800147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.800270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.800296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.799 qpair failed and we were unable to recover it. 00:21:21.799 [2024-04-24 16:17:22.800445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.800649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.800677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.799 qpair failed and we were unable to recover it. 00:21:21.799 [2024-04-24 16:17:22.800829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.801014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.801041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.799 qpair failed and we were unable to recover it. 00:21:21.799 [2024-04-24 16:17:22.801250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.801536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.801591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.799 qpair failed and we were unable to recover it. 
00:21:21.799 [2024-04-24 16:17:22.801794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.801956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.802000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.799 qpair failed and we were unable to recover it. 00:21:21.799 [2024-04-24 16:17:22.802172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.802341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.802370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.799 qpair failed and we were unable to recover it. 00:21:21.799 [2024-04-24 16:17:22.802544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.799 [2024-04-24 16:17:22.802718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.802755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.802943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.803105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.803131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.803311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.803566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.803623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.803827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.804008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.804035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.804226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.804410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.804439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 
00:21:21.800 [2024-04-24 16:17:22.804605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.804767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.804812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.804939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.805145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.805174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.805381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.805564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.805593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.805802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.805954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.805980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.806155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.806331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.806359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.806511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.806672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.806699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 00:21:21.800 [2024-04-24 16:17:22.806908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.807090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.800 [2024-04-24 16:17:22.807117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.800 qpair failed and we were unable to recover it. 
00:21:21.801 [2024-04-24 16:17:22.807329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.807509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.807538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.807721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.807888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.807933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.808136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.808296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.808338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.808503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.808706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.808736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.808955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.809211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.809272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.809480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.809628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.809657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.809829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.809995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.810025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 
00:21:21.801 [2024-04-24 16:17:22.810206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.810388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.810414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.810545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.810706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.810732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.810917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.811091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.811120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.811300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.811475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.811504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.811702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.811904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.811932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.812093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.812298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.812327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 00:21:21.801 [2024-04-24 16:17:22.812530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.812650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.801 [2024-04-24 16:17:22.812694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.801 qpair failed and we were unable to recover it. 
00:21:21.802 [2024-04-24 16:17:22.812887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.813029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.813056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.813236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.813433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.813462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.813662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.813851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.813880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.814032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.814260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.814312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.814499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.814639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.814666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.814854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.815101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.815167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.815342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.815518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.815547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 
00:21:21.802 [2024-04-24 16:17:22.815755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.815902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.815931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.816083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.816213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.816240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.816413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.816617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.816643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.816833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.817037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.817067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.817283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.817462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.817491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.817688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.817870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.817900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.818105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.818297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.818324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 
00:21:21.802 [2024-04-24 16:17:22.818485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.818642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.818669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.818832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.818991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.819035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.819163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.819296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.819325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.819522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.819678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.819705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.819874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.820062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.820089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.820230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.820391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.820421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.820591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.820751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.820795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 
00:21:21.802 [2024-04-24 16:17:22.820928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.821185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.821237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.821439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.821605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.821634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.821819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.821992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.822022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.802 [2024-04-24 16:17:22.822164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.822301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.802 [2024-04-24 16:17:22.822330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.802 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.822506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.822707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.822736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.822922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.823088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.823114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.823271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.823480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.823542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 
00:21:21.803 [2024-04-24 16:17:22.823718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.823906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.823933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.824117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.824276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.824318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.824527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.824651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.824678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.824863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.825061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.825096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.825296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.825475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.825505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.825718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.825909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.825936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.826135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.826308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.826337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 
00:21:21.803 [2024-04-24 16:17:22.826544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.826687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.826716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.826903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.827133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.827184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.827333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.827481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.827511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.827692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.827878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.827905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.803 [2024-04-24 16:17:22.828068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.828301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.803 [2024-04-24 16:17:22.828355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.803 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.828509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.828681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.828710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.828920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.829098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.829132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 
00:21:21.804 [2024-04-24 16:17:22.829336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.829507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.829536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.829681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.829824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.829854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.830056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.830221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.830248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.830432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.830607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.830636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.830824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.830962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.830988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.831159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.831293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.831334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.831515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.831713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.831751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 
00:21:21.804 [2024-04-24 16:17:22.831924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.832110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.832136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.832288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.832447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.832490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.832636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.832855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.832883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.833066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.833266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.833295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.833496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.833675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.833706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.833907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.834113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.834177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.834392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.834520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.834547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 
00:21:21.804 [2024-04-24 16:17:22.834729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.834872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.834899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.835102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.835296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.835325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.835497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.835693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.835720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.835890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.836064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.836093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.836243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.836437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.836464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.836654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.836845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.836875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 00:21:21.804 [2024-04-24 16:17:22.837043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.837228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.837271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it. 
00:21:21.804 [2024-04-24 16:17:22.837470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.837646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.804 [2024-04-24 16:17:22.837676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.804 qpair failed and we were unable to recover it.
00:21:21.804 [... the identical connect()/qpair-failure sequence repeats for every retry from 2024-04-24 16:17:22.837849 through 16:17:22.897128, with errno = 111, tqpair=0xdadf30, addr=10.0.0.2, port=4420 on every attempt ...]
00:21:21.811 [2024-04-24 16:17:22.897257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.897385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.897412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.897571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.897733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.897767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.897924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.898052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.898078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.898242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.898405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.898432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.898588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.898754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.898781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.898922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.899104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.899131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.899272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.899454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.899484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 
00:21:21.811 [2024-04-24 16:17:22.899689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.899839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.899866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.900039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.900185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.900219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.900377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.900532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.900562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.900715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.900890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.900918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.901088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.901241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.901270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.901470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.901622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.901651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 00:21:21.811 [2024-04-24 16:17:22.901805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.901962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.901989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.811 qpair failed and we were unable to recover it. 
00:21:21.811 [2024-04-24 16:17:22.902136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.902298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.811 [2024-04-24 16:17:22.902327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.902501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.902687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.902714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.902885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.903042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.903069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.903230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.903367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.903393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.903547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.903704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.903731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.903911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.904069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.904096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.904221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.904380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.904406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 
00:21:21.812 [2024-04-24 16:17:22.904552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.904761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.904788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.904971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.905122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.905149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.905303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.905486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.905513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.905713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.905922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.905953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.906136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.906293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.906320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.906478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.906694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.906724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.906907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.907088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.907115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 
00:21:21.812 [2024-04-24 16:17:22.907239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.907425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.907469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.907678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.907844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.907871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.908059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.908182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.908226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.908406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.908568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.908612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.908819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.908974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.909000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.909176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.909305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.909331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.909494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.909657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.909683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 
00:21:21.812 [2024-04-24 16:17:22.909855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.909982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.910008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.910219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.910373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.910400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.910529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.910686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.910713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.910888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.911043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.911070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.911191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.911395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.812 [2024-04-24 16:17:22.911425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.812 qpair failed and we were unable to recover it. 00:21:21.812 [2024-04-24 16:17:22.911573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.911755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.911783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.911936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.912091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.912118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 
00:21:21.813 [2024-04-24 16:17:22.912248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.912411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.912441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.912614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.912778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.912805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.912987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.913213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.913240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.913399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.913589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.913616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.913753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.913887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.913913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.914071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.914225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.914252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.914433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.914608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.914638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 
00:21:21.813 [2024-04-24 16:17:22.914795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.914947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.914974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.915152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.915317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.915346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.915526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.915703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.915733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.915904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.916038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.916065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.916264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.916448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.916474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.916651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.916837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.916867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.917045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.917200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.917227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 
00:21:21.813 [2024-04-24 16:17:22.917431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.917600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.917629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.917796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.917958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.917984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.918142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.918303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.918329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.918484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.918669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.918700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.918869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.919026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.919053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.919239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.919429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.919457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.919622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.919796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.919826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 
00:21:21.813 [2024-04-24 16:17:22.919968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.920164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.920193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.920355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.920538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.920581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.813 qpair failed and we were unable to recover it. 00:21:21.813 [2024-04-24 16:17:22.920756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.813 [2024-04-24 16:17:22.920931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.920960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.921134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.921314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.921340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.921528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.921731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.921767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.921920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.922113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.922139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.922299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.922465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.922507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 
00:21:21.814 [2024-04-24 16:17:22.922695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.922833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.922860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.923057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.923191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.923219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.923397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.923545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.923574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.923769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.923939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.923966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.924168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.924344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.924373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.924546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.924717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.924754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.924939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.925077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.925103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 
00:21:21.814 [2024-04-24 16:17:22.925258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.925465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.925494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.925662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.925879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.925906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.926041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.926227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.926253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.926388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.926543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.926570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.926703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.926833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.926858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.927042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.927221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.927250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 00:21:21.814 [2024-04-24 16:17:22.927427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.927586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.814 [2024-04-24 16:17:22.927623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.814 qpair failed and we were unable to recover it. 
00:21:21.815 [2024-04-24 16:17:22.927805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.927953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.927982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.928179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.928310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.928336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.928471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.928630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.928655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.928813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.928962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.928987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.929149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.929330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.929356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.929531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.929707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.929735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.929899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.930058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.930085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 
00:21:21.815 [2024-04-24 16:17:22.930277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.930430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.930482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.930662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.930839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.930869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.931056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.931189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.931216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.931379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.931541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.931586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.931738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.931928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.931954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.932131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.932372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.932422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 00:21:21.815 [2024-04-24 16:17:22.932576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.932737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.815 [2024-04-24 16:17:22.932771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.815 qpair failed and we were unable to recover it. 
00:21:21.815 [2024-04-24 16:17:22.932910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.815 [2024-04-24 16:17:22.933067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.815 [2024-04-24 16:17:22.933095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420
00:21:21.815 qpair failed and we were unable to recover it.
[... the same failure cycle (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error against tqpair=0xdadf30 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats with fresh timestamps from 16:17:22.933 through 16:17:22.989 — roughly 150 near-identical cycles omitted here ...]
00:21:21.826 [2024-04-24 16:17:22.989431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.826 [2024-04-24 16:17:22.989576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.826 [2024-04-24 16:17:22.989603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.826 qpair failed and we were unable to recover it. 00:21:21.826 [2024-04-24 16:17:22.989805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.826 [2024-04-24 16:17:22.989961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.826 [2024-04-24 16:17:22.989987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.990125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.990261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.990288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.990445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.990575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.990602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.990807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.990988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.991017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.991190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.991371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.991400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.991555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.991703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.991732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 
00:21:21.827 [2024-04-24 16:17:22.991926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.992114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.992141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.992298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.992451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.992478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.992607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.992771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.992803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.992980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.993118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.993145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.993297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.993469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.993498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.993657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.993795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.993824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.827 [2024-04-24 16:17:22.994001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.994154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.994180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 
00:21:21.827 [2024-04-24 16:17:22.994374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.994557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.827 [2024-04-24 16:17:22.994580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.827 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.994760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.994925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.994949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.995084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.995230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.995254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.995378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.995528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.995554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.995745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.995905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.995929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.996043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.996196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.996220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.996378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.996527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.996567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 
00:21:21.828 [2024-04-24 16:17:22.996720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.996909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.996934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.997081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.997276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.997304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.997441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.997607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.997635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.997798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.997956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.997981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.998126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.998262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.998287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.998450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.998669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.998693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.998822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.998959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.998984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 
00:21:21.828 [2024-04-24 16:17:22.999138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.999320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.999344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.999528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.999724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:22.999770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:22.999940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:23.000134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:23.000163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:23.000343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:23.000479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:23.000505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.828 [2024-04-24 16:17:23.000641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:23.000807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.828 [2024-04-24 16:17:23.000834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.828 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.000989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.001180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.001209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.001406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.001610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.001638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 
00:21:21.829 [2024-04-24 16:17:23.001847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.002006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.002041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.002201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.002416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.002443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.002576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.002717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.002749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.002904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.003078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.003104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.003230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.003410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.003436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.003614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.003800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.003826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.003954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.004157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.004184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 
00:21:21.829 [2024-04-24 16:17:23.004355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.004557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.004591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.004795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.004947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.004983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.005139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.005322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.005365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.005510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.005688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.005717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.005888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.006020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.006062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.006239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.006455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.006481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.006661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.006823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.006854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 
00:21:21.829 [2024-04-24 16:17:23.007035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.007193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.007220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.007387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.007570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.007599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.007765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.007939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.007967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.008162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.008350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.008380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.008573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.008728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.008797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.829 qpair failed and we were unable to recover it. 00:21:21.829 [2024-04-24 16:17:23.008973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.829 [2024-04-24 16:17:23.009166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.009195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.009375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.009510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.009536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 
00:21:21.830 [2024-04-24 16:17:23.009694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.009885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.009915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.010083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.010242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.010269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.010428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.010601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.010630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.010789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.010974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.011000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.011174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.011361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.011390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.011530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.011711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.011740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.011903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.012074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.012105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 
00:21:21.830 [2024-04-24 16:17:23.012245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.012403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.012429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.012652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.012811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.012841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.012987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.013134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.013164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.013338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.013499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.013524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.013683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.013860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.013889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.014076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.014278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.014348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 00:21:21.830 [2024-04-24 16:17:23.014521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.014718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.014756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.830 qpair failed and we were unable to recover it. 
00:21:21.830 [2024-04-24 16:17:23.014927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.015068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.830 [2024-04-24 16:17:23.015111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.015253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.015438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.015464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.015637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.015836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.015866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.016044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.016210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.016236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.016381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.016564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.016590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.016757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.016946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.016975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.017154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.017354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.017383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 
00:21:21.831 [2024-04-24 16:17:23.017601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.017766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.017793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.017956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.018116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.018142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.018369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.018545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.018574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.018753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.018952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.018978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.019150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.019300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.019328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.019529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.019661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.019686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.019837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.020025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.020054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 
00:21:21.831 [2024-04-24 16:17:23.020200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.020350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.020379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.020554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.020726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.020762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.020962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.021132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.021161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.021336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.021503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.021553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.021726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.021858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.021901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.022110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.022269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.022312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.022522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.022694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.022723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 
00:21:21.831 [2024-04-24 16:17:23.022910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.023099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.023149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.023295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.023474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.023503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.831 qpair failed and we were unable to recover it. 00:21:21.831 [2024-04-24 16:17:23.023665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.023828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.831 [2024-04-24 16:17:23.023859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.024020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.024155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.024180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.024305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.024496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.024539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.024736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.024927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.024954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.025118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.025323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.025352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 
00:21:21.832 [2024-04-24 16:17:23.025524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.025723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.025758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.025927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.026056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.026083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.026272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.026426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.026470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.026668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.026834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.026861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.027031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.027264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.027317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.027495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.027633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.027665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 00:21:21.832 [2024-04-24 16:17:23.027802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.027987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.832 [2024-04-24 16:17:23.028016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:21.832 qpair failed and we were unable to recover it. 
00:21:22.110 [2024-04-24 16:17:23.079640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.079816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.079846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.079997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.080136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.080165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.080326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.080457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.080486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.080618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.080777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.080803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.080960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.081108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.081134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.081335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.081498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.081524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.081707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.081894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.081924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 
00:21:22.110 [2024-04-24 16:17:23.082109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.082257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.082299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.082475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.082608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.082650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.082842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.082983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.083015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.083209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.083406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.083434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.083578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.083735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.083793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.083946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.084139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.084184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.084366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.084502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.084528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 
00:21:22.110 [2024-04-24 16:17:23.084666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.084826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.084852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.084982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.085167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.085195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.085362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.085545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.085574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.085753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.085914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.085942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.086075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.086238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.086264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.086397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.086568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.086616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.086776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.086910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.086936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 
00:21:22.110 [2024-04-24 16:17:23.087100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.087309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.087334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.087493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.087619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.087645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.087817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.088006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.088033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.088214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.088417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.088466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.088639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.088840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.088867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.088997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.089154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.089180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.089358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.089562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.089591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 
00:21:22.110 [2024-04-24 16:17:23.089764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.089914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.089943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.090138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.090297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.090324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.090485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.090648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.090676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.090834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.090972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.090998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.091138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.091321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.091350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.091469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.091682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.091708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.091902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.092064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.092093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 
00:21:22.110 [2024-04-24 16:17:23.092249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.092407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.092432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.092593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.092752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.092782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.092946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.093072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.093098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.093286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.093449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.093475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.110 qpair failed and we were unable to recover it. 00:21:22.110 [2024-04-24 16:17:23.093620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.110 [2024-04-24 16:17:23.093811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.093839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.093976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.094137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.094163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.094321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.094461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.094506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 
00:21:22.111 [2024-04-24 16:17:23.094637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.094818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.094844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.094978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.095167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.095193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.095396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.095519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.095544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.095671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.095850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.095894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.096054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.096265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.096298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.096524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.096732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.096769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.096933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.097083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.097109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 
00:21:22.111 [2024-04-24 16:17:23.097254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.097450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.097475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.097606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.097733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.097763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.097931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.098080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.098106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.098270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.098399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.098425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.098615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.098767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.098820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.098984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.099146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.099172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.099355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.099539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.099565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 
00:21:22.111 [2024-04-24 16:17:23.099694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.099828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.099854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.099993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.100183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.100208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.100366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.100518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.100544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.100678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.100815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.100842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.101011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.101170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.101195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.101378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.101540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.101568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.101767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.101922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.101949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 
00:21:22.111 [2024-04-24 16:17:23.102114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.102318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.102364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.102533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.102705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.102734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.102948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.103095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.103122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.103254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.103436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.103462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.103597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.103734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.103767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.103927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.104090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.104118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.104305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.104464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.104490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 
00:21:22.111 [2024-04-24 16:17:23.104617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.104758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.104803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.104981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.105155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.105184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.105371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.105550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.105582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.105757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.105934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.105962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.106113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.106270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.106295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.106431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.106591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.106617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.106811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.106953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.106981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 
00:21:22.111 [2024-04-24 16:17:23.107159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.107337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.107363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.107496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.107682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.107711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.107879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.108010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.108036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.108192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.108321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.108362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.108548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.108676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.108702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.108869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.109005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.109030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.109181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.109345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.109375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 
00:21:22.111 [2024-04-24 16:17:23.109529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.109663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.109691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.109848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.110029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.110056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.110243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.110415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.110444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.111 [2024-04-24 16:17:23.110643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.110815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.111 [2024-04-24 16:17:23.110844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.111 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.110998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.111174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.111203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.111392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.111554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.111597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.111813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.111969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.111996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 
00:21:22.112 [2024-04-24 16:17:23.112176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.112339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.112365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.112524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.112655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.112681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.112827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.113858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.113893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.114058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.114825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.114859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.115049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.115732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.115776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.115961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.116157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.116184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.116366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.116540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.116568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 
00:21:22.112 [2024-04-24 16:17:23.116790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.116956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.116984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.117164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.117449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.117501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.117690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.117839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.117866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.118056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.118231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.118260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.118435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.118609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.118637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.118812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.118960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.118988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.119201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.119362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.119387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 
00:21:22.112 [2024-04-24 16:17:23.119590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.119801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.119827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.119975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.120188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.120213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.120398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.120546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.120575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.120790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.120958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.120985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.121159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.121387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.121448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.121621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.121755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.121781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.121913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.122054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.122080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 
00:21:22.112 [2024-04-24 16:17:23.122263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.122394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.122423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.122608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.122785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.122814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.122961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.123141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.123170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.123373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.123574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.123602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.123789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.123919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.123960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.124163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.124660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.124694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.124891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.125023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.125051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 
00:21:22.112 [2024-04-24 16:17:23.125213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.125360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.125399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.125614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.125797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.125825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.125984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.126171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.126204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.126401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.126600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.126629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.126796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.126935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.126960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.127178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.127349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.127397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.127589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.127800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.127829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 
00:21:22.112 [2024-04-24 16:17:23.128602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.128798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.128826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.128970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.129186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.129212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.129373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.129546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.129575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.129750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.129907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.129935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.130108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.130253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.130281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.130461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.130618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.130643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.130822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.130963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.130992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 
00:21:22.112 [2024-04-24 16:17:23.131176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.131339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.131365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.131499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.131660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.131690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.131873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.132023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.132069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.132271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.132445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.132489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.132680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.132855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.132880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.133016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.133176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.133204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.133379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.133553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.133581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 
00:21:22.112 [2024-04-24 16:17:23.133760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.133917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.133942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.134101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.134280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.134310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.134486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.134687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.134715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.112 qpair failed and we were unable to recover it. 00:21:22.112 [2024-04-24 16:17:23.134881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.135014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.112 [2024-04-24 16:17:23.135043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.135271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.135435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.135461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.135626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.135803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.135830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.135973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.136114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.136156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 
00:21:22.113 [2024-04-24 16:17:23.136311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.136504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.136532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.136722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.136901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.136926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.137055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.137190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.137215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.137378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.137584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.137609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.137770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.137917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.137943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.138074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.138235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.138261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.138418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.138547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.138573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 
00:21:22.113 [2024-04-24 16:17:23.138771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.138921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.138947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.139142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.139302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.139328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.139490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.139644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.139670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.139826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.139987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.140013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.140184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.140361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.140389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.140576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.140836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.140862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.140994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.141190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.141216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 
00:21:22.113 [2024-04-24 16:17:23.141369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.141554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.141583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.141753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.141915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.141940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.142102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.142277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.142306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.142521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.142721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.142757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.142914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.143077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.143121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.143279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.143428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.143457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.143624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.143780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.143833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 
00:21:22.113 [2024-04-24 16:17:23.143977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.144154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.144183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.144326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.144499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.144527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.144670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.144863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.144889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.145041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.145186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.145212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.145371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.145523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.145551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.145684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.145859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.145886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.146018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.146207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.146235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 
00:21:22.113 [2024-04-24 16:17:23.146384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.146538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.146570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.146730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.146900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.146925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.147062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.147227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.147253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.147389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.147569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.147597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.147792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.147937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.147963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.148125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.148319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.148347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.148490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.148637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.148668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 
00:21:22.113 [2024-04-24 16:17:23.148837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.148980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.149024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.149174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.149350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.149378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.149558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.149690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.149718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.149902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.150034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.150060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.150242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.150456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.150484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.150662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.150827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.150853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.150988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.151128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.151154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 
00:21:22.113 [2024-04-24 16:17:23.151333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.151485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.151513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.151665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.151801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.151828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.151963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.152108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.152133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.152323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.152513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.152541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.152700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.152893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.152920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.153063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.153198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.153239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.153440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.153586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.153614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 
00:21:22.113 [2024-04-24 16:17:23.153773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.153919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.153945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.154103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.154274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.154302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.154513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.154689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.154717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.113 qpair failed and we were unable to recover it. 00:21:22.113 [2024-04-24 16:17:23.154860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.113 [2024-04-24 16:17:23.154989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.155014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.155204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.155374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.155402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.155573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.155714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.155751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.155917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.156036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.156064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 
00:21:22.114 [2024-04-24 16:17:23.156222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.156401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.156447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.156580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.156731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.156769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.156929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.157066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.157091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdadf30 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.157284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.157484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.157517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.157652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.157851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.157878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.158014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.158173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.158210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.158372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.158561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.158589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 
00:21:22.114 [2024-04-24 16:17:23.158748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.158900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.158926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.159068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.159210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.159237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.159415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.159600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.159629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.159794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.159927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.159953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.160103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.160241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.160267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.160429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.160591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.160621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.160825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.160962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.160988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 
00:21:22.114 [2024-04-24 16:17:23.161131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.161283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.161313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.161490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.161636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.161664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.161807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.161947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.161973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.162171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.162352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.162381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.162559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.162771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.162801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.162936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.163061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.163087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.163304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.163477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.163506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 
00:21:22.114 [2024-04-24 16:17:23.163669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.163857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.163888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.164027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.164156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.164184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.164327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.164491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.164520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.164691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.164828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.164853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.164991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.165158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.165186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.165373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.165530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.165559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.165761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.165934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.165962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 
00:21:22.114 [2024-04-24 16:17:23.166178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.166381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.166410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.166617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.166795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.166823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.166970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.167160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.167187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.167400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.167562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.167587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.167737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.167914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.167945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.168132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.168341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.168369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 00:21:22.114 [2024-04-24 16:17:23.168510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.168667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.114 [2024-04-24 16:17:23.168692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.114 qpair failed and we were unable to recover it. 
00:21:22.117 [2024-04-24 16:17:23.218985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.219202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.219228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.219430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.219605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.219632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.219816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.220021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.220046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.220227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.220403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.220431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.220572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.220779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.220807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.220948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.221160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.221185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.221343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.221505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.221532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 
00:21:22.117 [2024-04-24 16:17:23.221766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.221933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.221960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.222146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.222294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.222318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.222500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.222685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.222712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.222897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.223182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.223209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.223393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.223566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.223593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.223754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.223875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.223902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.224085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.224260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.224288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 
00:21:22.117 [2024-04-24 16:17:23.224473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.224628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.224652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.224808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.224940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.224967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.225124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.225290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.225316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.225513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.225674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.225698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.225817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.225969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.225994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.226227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.226389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.226413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.226571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.226702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.226726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 
00:21:22.117 [2024-04-24 16:17:23.226893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.227019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.227059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.227209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.227383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.227412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.227592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.227805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.227832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.227985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.228150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.228174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.228345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.228503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.228527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.228677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.228878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.228906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.229059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.229208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.229233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 
00:21:22.117 [2024-04-24 16:17:23.229392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.229590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.229617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.229829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.229989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.230016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.117 qpair failed and we were unable to recover it. 00:21:22.117 [2024-04-24 16:17:23.230195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.117 [2024-04-24 16:17:23.230395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.230422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.230578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.230766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.230808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.230989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.231169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.231194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.231328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.231485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.231526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.231729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.231894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.231918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 
00:21:22.118 [2024-04-24 16:17:23.232124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.232286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.232313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.232493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.232621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.232645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.232770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.232929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.232954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.233116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.233273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.233316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.233515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.233717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.233758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.233962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.234134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.234161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.234358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.234513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.234538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 
00:21:22.118 [2024-04-24 16:17:23.234659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.234819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.234860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.235006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.235180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.235207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.235384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.235561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.235586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.235767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.235945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.235971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.236180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.236357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.236382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.236559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.236733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.236767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.236970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.237119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.237147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 
00:21:22.118 [2024-04-24 16:17:23.237322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.237499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.237530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.237698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.237903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.237931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.238071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.238255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.238294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.238463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.238593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.238620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.238815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.239015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.239043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.239228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.239406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.239430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.239597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.239765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.239793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 
00:21:22.118 [2024-04-24 16:17:23.239969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.240153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.240181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.240354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.240510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.240534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.240738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.240902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.240945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.241123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.241300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.241332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.241514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.241724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.241758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.241920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.242101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.242127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.242281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.242466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.242506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 
00:21:22.118 [2024-04-24 16:17:23.242679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.242865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.242890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.243043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.243217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.243244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.243420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.243579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.243606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.243799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.243998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.244026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.244185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.244304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.244328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.244482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.244645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.244672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.244872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.245074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.245107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 
00:21:22.118 [2024-04-24 16:17:23.245282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.245457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.245484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.245691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.245858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.245883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.246018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.246205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.246246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.246417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.246584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.246611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.246814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.246967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.246995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.247173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.247346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.247373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.247551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.247759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.247784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 
00:21:22.118 [2024-04-24 16:17:23.247935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.248130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.248154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.248307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.248458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.248499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.248673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.248853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.248881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.249075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.249259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.249284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.249441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.249588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.249612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.249752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.249904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.249929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.118 qpair failed and we were unable to recover it. 00:21:22.118 [2024-04-24 16:17:23.250149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.250366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.118 [2024-04-24 16:17:23.250393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 
00:21:22.119 [2024-04-24 16:17:23.250566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.250736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.250773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.250948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.251119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.251146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.251360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.251511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.251537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.251696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.251881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.251923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.252060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.252225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.252253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.252459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.252619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.252644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.252805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.252968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.252994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 
00:21:22.119 [2024-04-24 16:17:23.253144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.253298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.253323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.253442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.253604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.253629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.253852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.254012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.254036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.254198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.254346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.254388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.254560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.254740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.254772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.254941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.255116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.255143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.255305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.255446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.255473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 
00:21:22.119 [2024-04-24 16:17:23.255644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.255811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.255840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.256024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.256227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.256254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.256436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.256576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.256602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.256810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.256961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.256988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.257167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.257369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.257396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.257567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.257762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.257790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.257985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.258136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.258160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 
00:21:22.119 [2024-04-24 16:17:23.258321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.258460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.258484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.258694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.258865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.258890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.259029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.259189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.259213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.259340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.259503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.259545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.259729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.259916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.259940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.260132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.260309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.260335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 00:21:22.119 [2024-04-24 16:17:23.260500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.260655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.119 [2024-04-24 16:17:23.260679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.119 qpair failed and we were unable to recover it. 
[... the same four-line sequence (two "connect() failed, errno = 111" entries from posix_sock_create, the nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420" entry, then "qpair failed and we were unable to recover it.") repeats verbatim for every retry, with only the timestamps advancing from 16:17:23.258 to 16:17:23.315 ...]
00:21:22.122 [2024-04-24 16:17:23.315424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.315580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.315605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.315770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.315900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.315925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.316109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.316284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.316311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.316480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.316649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.316676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.316823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.316991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.317015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.317179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.317314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.317338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.317492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.317646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.317670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 
00:21:22.122 [2024-04-24 16:17:23.317833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.317966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.317990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.318146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.318348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.318375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.318572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.318713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.318740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.318929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.319125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.319153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.319339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.319492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.319517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.319678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.319813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.319838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.319965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.320122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.320146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 
00:21:22.122 [2024-04-24 16:17:23.320299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.320454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.320478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.320636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.320759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.320785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.320968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.321159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.321184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.321367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.321521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.321545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.321702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.321904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.321933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.322110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.322315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.322339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.322461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.322672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.322700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 
00:21:22.122 [2024-04-24 16:17:23.322885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.323065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.323090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.323284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.323433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.323474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.323627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.323805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.323846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.324021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.324179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.324203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.324331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.324523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.324551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.324704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.324889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.324913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.325131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.325273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.325302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 
00:21:22.122 [2024-04-24 16:17:23.325461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.325627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.325653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.325807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.325992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.326017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.122 qpair failed and we were unable to recover it. 00:21:22.122 [2024-04-24 16:17:23.326185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.122 [2024-04-24 16:17:23.326380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.326408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.326578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.326712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.326738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.326910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.327072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.327115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.327285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.327472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.327514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.327659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.327833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.327861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 
00:21:22.123 [2024-04-24 16:17:23.328042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.328199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.328224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.328399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.328592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.328619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.328791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.328965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.328993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.329174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.329333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.329357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.329556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.329736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.329766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.329975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.330145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.330173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.330379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.330558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.330586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 
00:21:22.123 [2024-04-24 16:17:23.330768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.330951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.330976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.331195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.331393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.331420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.331603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.331781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.331810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.332015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.332161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.332203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.332403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.332586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.332615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.332798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.332957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.332996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.333169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.333348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.333375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 
00:21:22.123 [2024-04-24 16:17:23.333576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.333765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.333790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.333971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.334143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.334170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.334376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.334517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.334543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.334715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.334938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.334964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.335115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.335264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.335306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.335482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.335680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.335707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.335868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.336064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.336091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 
00:21:22.123 [2024-04-24 16:17:23.336288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.336443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.336468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.336598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.336763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.336789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.336947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.337125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.337151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.337339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.337539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.337567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.337748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.337909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.337934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.338086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.338286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.338314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.338472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.338596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.338622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 
00:21:22.123 [2024-04-24 16:17:23.338823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.339031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.339056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.339238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.339435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.339463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.339652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.339836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.339861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.339982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.340168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.340195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.340358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.340559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.340586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.340790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.340945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.340970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.341099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.341258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.341285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 
00:21:22.123 [2024-04-24 16:17:23.341491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.341694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.341721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.341904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.342033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.342057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.342243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.342404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.342428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.342590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.342752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.342780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.342984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.343190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.343217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.343386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.343569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.343595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.343730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.343920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.343945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 
00:21:22.123 [2024-04-24 16:17:23.344083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.344236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.344278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.344457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.344613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.344637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.344775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.344953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.344979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.345121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.345320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.345348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.345529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.345712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.345761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.345939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.346121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.346149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.123 qpair failed and we were unable to recover it. 00:21:22.123 [2024-04-24 16:17:23.346323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.346532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.123 [2024-04-24 16:17:23.346559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 
00:21:22.124 [2024-04-24 16:17:23.346730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.346916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.346941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.347127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.347291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.347319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.347492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.347613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.347638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.347837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.347986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.348012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.348168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.348370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.348398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.348578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.348731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.348782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.348959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.349122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.349150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 
00:21:22.124 [2024-04-24 16:17:23.349317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.349484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.349512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.349678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.349841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.349871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.349995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.350119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.350145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.350353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.350532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.350560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.350734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.350939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.350966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.351108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.351284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.351311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 00:21:22.124 [2024-04-24 16:17:23.351487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.351656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.124 [2024-04-24 16:17:23.351685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.124 qpair failed and we were unable to recover it. 
00:21:22.124 [2024-04-24 16:17:23.351870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.124 [2024-04-24 16:17:23.352017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.124 [2024-04-24 16:17:23.352059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:22.124 qpair failed and we were unable to recover it.
[... identical posix_sock_create / nvme_tcp_qpair_connect_sock failure records from 16:17:23.352259 through 16:17:23.408556 omitted; every reconnect attempt against 10.0.0.2:4420 failed the same way ...]
00:21:22.401 [2024-04-24 16:17:23.408734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.401 [2024-04-24 16:17:23.408904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.401 [2024-04-24 16:17:23.408929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:22.401 qpair failed and we were unable to recover it.
00:21:22.401 [2024-04-24 16:17:23.409066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.409280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.409305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.409481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.409677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.409710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.409898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.410044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.410087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.410260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.410431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.410458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.410650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.410784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.410810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.410949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.411149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.411178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.411350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.411532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.411558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 
00:21:22.401 [2024-04-24 16:17:23.411738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.411910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.411935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.412105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.412261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.412302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.412473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.412666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.412722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.412913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.413078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.413107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.413260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.413397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.413428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.413636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.413781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.413810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.413986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.414161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.414189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 
00:21:22.401 [2024-04-24 16:17:23.414372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.414502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.414527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.414658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.414808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.414835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.414999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.415142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.415171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.415332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.415464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.415489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.415667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.415838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.415869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.416051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.416266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.416317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.416503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.416659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.416684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 
00:21:22.401 [2024-04-24 16:17:23.416874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.417048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.417080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.417255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.417450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.417480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.417685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.417849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.417874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.418001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.418150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.418174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.418384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.418585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.418613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.418790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.418928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.418952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.419089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.419269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.419300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 
00:21:22.401 [2024-04-24 16:17:23.419494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.419621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.419645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.419779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.419898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.419922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.420053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.420211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.420235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.420445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.420590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.420625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.420782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.420948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.420973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.421129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.421324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.421351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.421542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.421703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.421728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 
00:21:22.401 [2024-04-24 16:17:23.421866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.422028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.422070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.422219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.422363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.422391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.422564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.422752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.422778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.422933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.423100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.423125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.423256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.423402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.423426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.423580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.423758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.423787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.423949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.424085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.424110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 
00:21:22.401 [2024-04-24 16:17:23.424305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.424462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.424491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.424649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.424792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.424820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.401 [2024-04-24 16:17:23.424979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.425107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 16:17:23.425131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.401 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.425347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.425529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.425555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.425692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.425827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.425852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.426041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.426196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.426221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.426424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.426560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.426587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 
00:21:22.402 [2024-04-24 16:17:23.426802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.426920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.426945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.427079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.427232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.427258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.427456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.427612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.427638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.427783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.427968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.427996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.428128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.428283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.428308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.428464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.428625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.428649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.428823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.428991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.429016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 
00:21:22.402 [2024-04-24 16:17:23.429220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.429349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.429374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.429510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.429644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.429669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.429825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.429991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.430016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.430162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.430293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.430317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.430445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.430598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.430621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.430796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.430976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.431001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.431162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.431306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.431351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 
00:21:22.402 [2024-04-24 16:17:23.431522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.431694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.431723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.431902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.432042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.432069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.432245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.432377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.432402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.432589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.432721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.432780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.432960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.433161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.433196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.433398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.433520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.433545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.433670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.433827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.433855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 
00:21:22.402 [2024-04-24 16:17:23.434010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.434191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.434217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.434376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.434513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.434556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.434716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.434902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.434928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.435065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.435241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.435268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.435445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.435585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.435609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.435738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.435869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.435894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.436044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.436249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.436277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 
00:21:22.402 [2024-04-24 16:17:23.436415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.436552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.436576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.436799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.436958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.436984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.437121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.437314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.437338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.437507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.437649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.437675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.437817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.437968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.437993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.438152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.438310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.438334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.438503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.438681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.438708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 
00:21:22.402 [2024-04-24 16:17:23.438881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.439019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.439045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.439196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.439358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.439386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.439533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.439703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.439730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.439916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.440133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.440185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.440337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.440499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.440523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.440675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.440873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.440899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.441032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.441169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.441194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 
00:21:22.402 [2024-04-24 16:17:23.441329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.441489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.441513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.441682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.441868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.441894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.442025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.442209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.442237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.442391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.442542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.442581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.442810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.442966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.442992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.443117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.443259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.443284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.443467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.443615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.443641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 
00:21:22.402 [2024-04-24 16:17:23.443794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.443930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.443955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.444154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.444279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.444305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.444495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.444646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.444672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.444816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.444996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.445021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.402 qpair failed and we were unable to recover it. 00:21:22.402 [2024-04-24 16:17:23.445173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 16:17:23.445354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.403 [2024-04-24 16:17:23.445381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.403 qpair failed and we were unable to recover it. 00:21:22.403 [2024-04-24 16:17:23.445550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.403 [2024-04-24 16:17:23.445733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.403 [2024-04-24 16:17:23.445766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.403 qpair failed and we were unable to recover it. 00:21:22.403 [2024-04-24 16:17:23.445904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.403 [2024-04-24 16:17:23.446024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.403 [2024-04-24 16:17:23.446051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.403 qpair failed and we were unable to recover it. 
00:21:22.403 [... the same three-line failure sequence repeats for every remaining connection attempt through 2024-04-24 16:17:23.498191: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reported a sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420, and each qpair failed and could not be recovered ...]
00:21:22.405 [2024-04-24 16:17:23.498346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.498528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.498553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.405 qpair failed and we were unable to recover it. 00:21:22.405 [2024-04-24 16:17:23.498748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.498913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.498937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.405 qpair failed and we were unable to recover it. 00:21:22.405 [2024-04-24 16:17:23.499154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.499331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.499357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.405 qpair failed and we were unable to recover it. 00:21:22.405 [2024-04-24 16:17:23.499522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.499723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.499759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.405 qpair failed and we were unable to recover it. 00:21:22.405 [2024-04-24 16:17:23.499917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.500089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.500117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.405 qpair failed and we were unable to recover it. 00:21:22.405 [2024-04-24 16:17:23.500290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.500431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.500459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.405 qpair failed and we were unable to recover it. 00:21:22.405 [2024-04-24 16:17:23.500631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.500806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.405 [2024-04-24 16:17:23.500835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 
00:21:22.430 [2024-04-24 16:17:23.500983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.501168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.501195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.501364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.501484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.501508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.501656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.501788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.501814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.501972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.502111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.502138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.502317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.502497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.502526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.502700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.502849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.502891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.503085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.503328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.503352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 
00:21:22.430 [2024-04-24 16:17:23.503507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.503663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.503688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.503883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.504027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.504056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.504231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.504389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.504413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.504554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.504749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.504777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.504950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.505123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.505151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.505326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.505508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.505537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.505675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.505834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.505860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 
00:21:22.430 [2024-04-24 16:17:23.506055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.506180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.506206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.506346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.506527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.506552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.506758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.506933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.506961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.507111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.507292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.507318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.507496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.507655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.507679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.507840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.508025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.508052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.508256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.508453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.508481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 
00:21:22.430 [2024-04-24 16:17:23.508634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.508814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.508840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.508982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.509161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.509188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.509343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.509490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.509517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.509698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.509882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.509911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.510060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.510183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.510209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.510356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.510492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.510517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 00:21:22.430 [2024-04-24 16:17:23.510638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.510827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.430 [2024-04-24 16:17:23.510856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.430 qpair failed and we were unable to recover it. 
00:21:22.430 [2024-04-24 16:17:23.511016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.511197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.511226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.511403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.511556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.511584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.511748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.511920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.511949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.512098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.512294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.512338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.512490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.512651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.512677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.512825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.512949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.512975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.513155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.513339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.513367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 
00:21:22.431 [2024-04-24 16:17:23.513540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.513712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.513738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.513910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.514053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.514080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.514272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.514427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.514466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.514629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.514780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.514811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.514957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.515135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.515162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.515339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.515507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.515534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.515670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.515802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.515828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 
00:21:22.431 [2024-04-24 16:17:23.515962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.516108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.516135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.516321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.516499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.516524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.516664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.516815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.516840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.517032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.517199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.517226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.517361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.517555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.517583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.517726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.517887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.517915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.518090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.518266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.518294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 
00:21:22.431 [2024-04-24 16:17:23.518446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.518581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.518605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.518807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.518970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.518998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.519126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.519319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.519347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.519492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.519653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.519678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.519862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.519990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.520015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.520205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.520385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.520417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.431 qpair failed and we were unable to recover it. 00:21:22.431 [2024-04-24 16:17:23.520592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.431 [2024-04-24 16:17:23.520766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.520794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 
00:21:22.432 [2024-04-24 16:17:23.520956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.521102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.521129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.521289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.521431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.521456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.521657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.521860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.521889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.522024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.522182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.522206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.522354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.522500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.522531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.522687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.522868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.522897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.523071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.523249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.523276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 
00:21:22.432 [2024-04-24 16:17:23.523416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.523602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.523628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.523792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.523953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.523986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.524133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.524294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.524318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.524453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.524604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.524629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.524797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.524974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.525002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.525192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.525349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.525374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.525529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.525686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.525711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 
00:21:22.432 [2024-04-24 16:17:23.525869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.526017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.526044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.526251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.526425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.526453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.526604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.526755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.526782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.526958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.527114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.527140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.527277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.527462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.527497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.527668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.527811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.527838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.528036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.528228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.528253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 
00:21:22.432 [2024-04-24 16:17:23.528411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.528553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.528592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.528747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.528888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.528915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.529079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.529247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.529273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.529435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.529581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.529605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.529762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.529898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.529923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.530044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.530181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.530206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.530374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.530493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.530519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 
00:21:22.432 [2024-04-24 16:17:23.530658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.530821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.530852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.530996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.531143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.432 [2024-04-24 16:17:23.531184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.432 qpair failed and we were unable to recover it. 00:21:22.432 [2024-04-24 16:17:23.531358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.531525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.531553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.433 qpair failed and we were unable to recover it. 00:21:22.433 [2024-04-24 16:17:23.531753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.531931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.531958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.433 qpair failed and we were unable to recover it. 00:21:22.433 [2024-04-24 16:17:23.532120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.532292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.532320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.433 qpair failed and we were unable to recover it. 00:21:22.433 [2024-04-24 16:17:23.532517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.532683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.532710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.433 qpair failed and we were unable to recover it. 00:21:22.433 [2024-04-24 16:17:23.532876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.533002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.433 [2024-04-24 16:17:23.533026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.433 qpair failed and we were unable to recover it. 
00:21:22.433 [2024-04-24 16:17:23.533202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.433 [2024-04-24 16:17:23.533345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.433 [2024-04-24 16:17:23.533371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:22.433 qpair failed and we were unable to recover it.
00:21:22.433 [2024-04-24 16:17:23.533571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.433 [2024-04-24 16:17:23.533717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.433 [2024-04-24 16:17:23.533753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420
00:21:22.433 qpair failed and we were unable to recover it.
00:21:22.436 [... the same four-line failure cycle (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error, then "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x7fab04000b90, addr=10.0.0.2, port=4420, with only the microsecond timestamps advancing from 16:17:23.533 through 16:17:23.587; roughly 150 duplicate cycles elided ...]
00:21:22.436 [2024-04-24 16:17:23.588000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.588137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.588169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.588329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.588498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.588524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.588709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.588893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.588920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.589064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.589269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.589297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.589461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.589618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.589645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.589829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.589968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.590010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.590183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.590350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.590376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 
00:21:22.436 [2024-04-24 16:17:23.590523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.590656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.590685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.590869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.591001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.591030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.591213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.591427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.591474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.591611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.591791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.591821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.592005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.592191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.592215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.592323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.592449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.592474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.592636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.592805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.592831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 
00:21:22.436 [2024-04-24 16:17:23.592966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.593079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.593104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.593291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.593476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.593501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.593625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.593764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.593790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.593935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.594096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.594123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.594259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.594437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.594465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.594643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.594791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.594815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.594997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.595175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.595218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 
00:21:22.436 [2024-04-24 16:17:23.595402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.595566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.595595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.595755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.595932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.595960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.596106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.596297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.596339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.596511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.596669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.596711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.596866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.597056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.597080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.597236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.597435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.597463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.597653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.597865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.597893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 
00:21:22.436 [2024-04-24 16:17:23.598042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.598223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.598248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.436 [2024-04-24 16:17:23.598370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.598535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.436 [2024-04-24 16:17:23.598562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.436 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.598764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.598922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.598948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.599114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.599319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.599346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.599517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.599675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.599700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.599832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.599996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.600020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.600210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.600405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.600430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 
00:21:22.437 [2024-04-24 16:17:23.600593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.600731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.600765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.600904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.601059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.601083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.601231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.601354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.601378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.601536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.601717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.601775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.601948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.602171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.602216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.602406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.602566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.602592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.602763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.602946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.602971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 
00:21:22.437 [2024-04-24 16:17:23.603150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.603302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.603330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.603513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.603693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.603718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.603861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.604004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.604029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.604236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.604435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.604461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.604638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.604841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.604870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.605074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.605269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.605295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.605455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.605575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.605601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 
00:21:22.437 [2024-04-24 16:17:23.605770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.605947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.605972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.606096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.606277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.606305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.606467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.606671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.606701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.606912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.607114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.607142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.607319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.607518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.607546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.607707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.607853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.607879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.608024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.608208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.608232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 
00:21:22.437 [2024-04-24 16:17:23.608373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.608578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.608606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.608816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.608974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.609001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.609147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.609289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.609318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.609465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.609623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.609649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.609819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.609955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.609980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.610148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.610309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.610335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.610494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.610651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.610676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 
00:21:22.437 [2024-04-24 16:17:23.610850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.611004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.611033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.611207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.611326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.611352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.611547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.611683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.611709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.611847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.612010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.612035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.612169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.612332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.612358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.612518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.612652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.612693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.612860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.613006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.613033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 
00:21:22.437 [2024-04-24 16:17:23.613188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.613388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.613417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.613604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.613774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.613804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.613951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.614135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.614163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.614328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.614479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.614507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.614685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.614837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.614866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.615024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.615169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.615196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 00:21:22.437 [2024-04-24 16:17:23.615379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.615512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.615537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.437 qpair failed and we were unable to recover it. 
00:21:22.437 [2024-04-24 16:17:23.615668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.437 [2024-04-24 16:17:23.615829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.615855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.616023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.616191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.616217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.616400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.616525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.616549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.616710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.616861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.616888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.617054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.617174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.617200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.617348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.617538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.617568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.617708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.617856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.617886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 
00:21:22.438 [2024-04-24 16:17:23.618047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.618188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.618213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.618351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.618528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.618570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.618752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.618928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.618955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.619098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.619264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.619293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.619472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.619627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.619651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.619871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.620071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.620098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.620268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.620390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.620415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 
00:21:22.438 [2024-04-24 16:17:23.620568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.620722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.620756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.620901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.621049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.621074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.621219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.621413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.621440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.621622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.621787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.621812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.621946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.622098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.622122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.622287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.622409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.622435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.622588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.622705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.622729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 
00:21:22.438 [2024-04-24 16:17:23.622875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.623032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.623062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.623260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.623458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.623485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.623664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.623804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.623847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.624024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.624215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.624239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.624365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.624552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.624580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.624763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.624928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.624957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.625133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.625314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.625342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 
00:21:22.438 [2024-04-24 16:17:23.625525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.625672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.625697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.625837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.625996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.626021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.626183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.626342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.626368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.626494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.626674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.626701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.626909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.627067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.627094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.627292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.627431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.627458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.627641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.627771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.627797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 
00:21:22.438 [2024-04-24 16:17:23.628010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.628189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.628216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.628387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.628594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.628621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.628761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.628921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.628946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.629097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.629304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.629331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.629492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.629680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.629706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.629887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.630055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.630084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.630264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.630436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.630464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 
00:21:22.438 [2024-04-24 16:17:23.630642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.630818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.630847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.631030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.631156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.631182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.631372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.631521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.631552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.631731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.631904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.631929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.632069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.632227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.632269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.632419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.632574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.632598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 00:21:22.438 [2024-04-24 16:17:23.632797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.632946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.632991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.438 qpair failed and we were unable to recover it. 
00:21:22.438 [2024-04-24 16:17:23.633163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.438 [2024-04-24 16:17:23.633359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.633387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.633539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.633703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.633731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.633901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.634057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.634098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.634261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.634386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.634412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.634632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.634802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.634830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.634988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.635110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.635140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.635289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.635409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.635433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 
00:21:22.439 [2024-04-24 16:17:23.635576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.635780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.635807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.635969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.636175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.636203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.636387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.636526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.636554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.636710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.636880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.636905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.637033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.637207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.637235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.637428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.637553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.637580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.637780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.637939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.637965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 
00:21:22.439 [2024-04-24 16:17:23.638113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.638258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.638283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.638431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.638567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.638600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.638780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.638924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.638952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.639143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.639293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.639318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.639469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.639670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.639696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.639863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.640050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.640094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.640269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.640409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.640437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 
00:21:22.439 [2024-04-24 16:17:23.640602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.640757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.640782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.640923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.641042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.641068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.641239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.641410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.641436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.641634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.641791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.641819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.641954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.642133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.642167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.642328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.642485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.642527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.642703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.642891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.642919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 
00:21:22.439 [2024-04-24 16:17:23.643076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.643223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.643248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.643413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.643572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.643601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.643808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.643964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.643990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.644192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.644349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.644376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.644545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.644685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.644713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.644868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.645033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.645060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.645236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.645355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.645398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 
00:21:22.439 [2024-04-24 16:17:23.645575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.645796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.645822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.645983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.646105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.646131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.646269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.646402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.646426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.646576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.646729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.646779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.646913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.647088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.647113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.647280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.647499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.647528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.647729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.647902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.647932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 
00:21:22.439 [2024-04-24 16:17:23.648116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.648236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.648261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.648423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.648619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.648644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.648812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.648989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.649017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.649189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.649365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.649392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.649574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.649736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.649768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.649928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.650071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.650099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.650230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.650379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.650407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 
00:21:22.439 [2024-04-24 16:17:23.650597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.650764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.650792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.650975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.651179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.651207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.651382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.651522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.651550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.651685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.651887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.651916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.652112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.652295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.652321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.652455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.652607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.652632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.652818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.652998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.653024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 
00:21:22.439 [2024-04-24 16:17:23.653160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.653323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.653354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.653532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.653690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.653719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.653933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.654088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.654116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.439 [2024-04-24 16:17:23.654311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.654481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.439 [2024-04-24 16:17:23.654524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.439 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.654659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.654826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.654853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.654993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.655150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.655196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.655361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.655506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.655532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 
00:21:22.440 [2024-04-24 16:17:23.655697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.655865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.655910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.656094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.656270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.656313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.656462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.656638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.656664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.656863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.657071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.657115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.657296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.657474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.657500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.657629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.657817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.657861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.658024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.658218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.658261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 
00:21:22.440 [2024-04-24 16:17:23.658470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.658625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.658650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.658796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.658929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.658954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.659144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.659339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.659382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.659548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.659685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.659719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.659881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.660043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.660086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.660262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.660396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.660421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.660548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.660729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.660760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 
00:21:22.440 [2024-04-24 16:17:23.660950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.661119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.661164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.661307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.661484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.661528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.661670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.661847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.661891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.662050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.662277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.662320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.662507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.662682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.662708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.662878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.663047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.663074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.663240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.663408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.663453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 
00:21:22.440 [2024-04-24 16:17:23.663633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.663816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.663859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.664006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.664174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.664219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.664432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.664588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.664614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.664793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.664988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.665016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.665209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.665362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.665389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.665536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.665671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.665698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.665904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.666102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.666146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 
00:21:22.440 [2024-04-24 16:17:23.666310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.666476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.666502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.666638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.666831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.666875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.667052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.667225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.667271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.667432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.667574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.667599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.667763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.667948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.667994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.668194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.668349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.668375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 00:21:22.440 [2024-04-24 16:17:23.668532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.668678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.440 [2024-04-24 16:17:23.668713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.440 qpair failed and we were unable to recover it. 
00:21:22.440 [2024-04-24 16:17:23.668881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.440 [2024-04-24 16:17:23.669049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.440 [2024-04-24 16:17:23.669090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:22.440 qpair failed and we were unable to recover it.
[... the same three-line sequence — two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock error for tqpair=0x7faafc000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." — repeats continuously from 16:17:23.669282 through 16:17:23.725640 ...]
00:21:22.720 [2024-04-24 16:17:23.725819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.720 [2024-04-24 16:17:23.726000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.720 [2024-04-24 16:17:23.726044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:22.720 qpair failed and we were unable to recover it.
00:21:22.720 [2024-04-24 16:17:23.726207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.726444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.726488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.726651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.726789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.726818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.726993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.727195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.727238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.727410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.727614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.727639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.727762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.727948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.727991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.728179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.728368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.728411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.728599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.728756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.728785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 
00:21:22.720 [2024-04-24 16:17:23.728962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.729191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.729235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.729401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.729564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.729590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.729770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.729968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.730012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.730209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.730358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.730385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.730540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.730661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.730688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.730886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.731072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.731114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.731296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.731471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.731497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 
00:21:22.720 [2024-04-24 16:17:23.731626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.731812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.731856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.732020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.732227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.732269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.732429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.732619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.720 [2024-04-24 16:17:23.732645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.720 qpair failed and we were unable to recover it. 00:21:22.720 [2024-04-24 16:17:23.732833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.732988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.733013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.733194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.733342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.733368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.733531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.733663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.733688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.733857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.734047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.734075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 
00:21:22.721 [2024-04-24 16:17:23.734275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.734452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.734478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.734633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.734834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.734880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.735055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.735285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.735329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.735508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.735654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.735679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.735877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.736079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.736121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.736296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.736474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.736501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.736655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.736846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.736890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 
00:21:22.721 [2024-04-24 16:17:23.737072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.737260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.737302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.737457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.737614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.737639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.737819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.738034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.738076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.738258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.738426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.738451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.738608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.738761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.738787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.738977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.739172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.739215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.739390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.739576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.739601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 
00:21:22.721 [2024-04-24 16:17:23.739769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.739972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.740017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.740224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.740398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.740424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.740583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.740767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.740793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.740979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.741209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.741252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.741418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.741612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.741637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.741824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.742046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.742089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.742277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.742471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.742499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 
00:21:22.721 [2024-04-24 16:17:23.742645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.742802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.742846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.743013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.743183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.743209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.743349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.743482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.743508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.721 [2024-04-24 16:17:23.743642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.743796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.721 [2024-04-24 16:17:23.743825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.721 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.744023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.744221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.744264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.744445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.744575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.744602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.744770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.744943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.744985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 
00:21:22.722 [2024-04-24 16:17:23.745192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.745364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.745388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.745572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.745705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.745737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.745953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.746142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.746169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.746377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.746543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.746569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.746748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.746905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.746947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.747162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.747361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.747403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.747586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.747759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.747785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 
00:21:22.722 [2024-04-24 16:17:23.747948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.748173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.748216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.748406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.748586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.748628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.748762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.748946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.748989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.749144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.749341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.749384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.749547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.749704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.749730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.749948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.750146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.750189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.750364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.750510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.750535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 
00:21:22.722 [2024-04-24 16:17:23.750722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.750908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.750951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.751161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.751368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.751413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.751575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.751765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.751796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.751972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.752154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.752197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.752402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.752559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.752584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.752750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.752886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.752911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.753097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.753258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.753302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 
00:21:22.722 [2024-04-24 16:17:23.753443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.753617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.753641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.753819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.754033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.754075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.754261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.754484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.754528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.754691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.754906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.754949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.722 qpair failed and we were unable to recover it. 00:21:22.722 [2024-04-24 16:17:23.755129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.755354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.722 [2024-04-24 16:17:23.755396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.755587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.755764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.755794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.755952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.756113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.756156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 
00:21:22.723 [2024-04-24 16:17:23.756368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.756559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.756607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.756739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.756931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.756974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.757147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.757315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.757358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.757538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.757706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.757731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.757934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.758135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.758178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.758358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.758531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.758557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.758717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.758878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.758923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 
00:21:22.723 [2024-04-24 16:17:23.759104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.759302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.759346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.759501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.759657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.759686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.759868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.760056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.760098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.760278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.760577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.760633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.760841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.761011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.761054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.761249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.761448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.761473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.761601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.761735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.761766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 
00:21:22.723 [2024-04-24 16:17:23.761951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.762144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.762171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.762302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.762483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.762509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.762667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.762857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.762901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.763047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.763245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.763290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.763442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.763599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.763628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.763755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.763931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.763974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 00:21:22.723 [2024-04-24 16:17:23.764188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.764335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.723 [2024-04-24 16:17:23.764362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.723 qpair failed and we were unable to recover it. 
00:21:22.723 [2024-04-24 16:17:23.764521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.723 [2024-04-24 16:17:23.764700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.723 [2024-04-24 16:17:23.764726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:22.723 qpair failed and we were unable to recover it.
[... the same four-line sequence repeats for every retry from 16:17:23.764900 through 16:17:23.824389, each attempt failing with connect() errno = 111 for tqpair=0x7faafc000b90, addr=10.0.0.2, port=4420 ...]
00:21:22.729 [2024-04-24 16:17:23.824389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.729 [2024-04-24 16:17:23.824598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.729 [2024-04-24 16:17:23.824623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:22.729 qpair failed and we were unable to recover it.
00:21:22.729 [2024-04-24 16:17:23.824755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.824936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.824981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.825135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.825325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.825367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.825509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.825672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.825697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.825886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.826083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.826126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.826305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.826453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.826478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.826650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.826832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.826876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.827062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.827297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.827324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 
00:21:22.729 [2024-04-24 16:17:23.827528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.827662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.827687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.827860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.828065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.828111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.828307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.828440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.828466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.828624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.828765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.828792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.828968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.829163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.829205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.829356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.829505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.829532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.829671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.829825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.829870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 
00:21:22.729 [2024-04-24 16:17:23.830031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.830221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.830263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.830458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.830614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.830639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.830809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.830998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.831025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.831210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.831379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.831404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.831586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.831786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.831815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.832028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.832234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.832276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.832435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.832592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.832617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 
00:21:22.729 [2024-04-24 16:17:23.832805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.833014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.833055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.833233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.833402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.833428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.833609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.833812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.833855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.834065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.834266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.834307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.834497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.834648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.834674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.834874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.835101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.835144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.835339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.835490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.835516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 
00:21:22.729 [2024-04-24 16:17:23.835700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.835867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.835910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.729 qpair failed and we were unable to recover it. 00:21:22.729 [2024-04-24 16:17:23.836090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.729 [2024-04-24 16:17:23.836278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.836320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.836531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.836681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.836706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.836911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.837112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.837156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.837341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.837523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.837549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.837709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.837904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.837947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.838114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.838335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.838382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 
00:21:22.730 [2024-04-24 16:17:23.838507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.838673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.838699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.838892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.839087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.839130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.839319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.839536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.839561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.839687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.839863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.839906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.840100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.840343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.840395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.840579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.840764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.840799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.840976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.841165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.841206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 
00:21:22.730 [2024-04-24 16:17:23.841369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.841543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.841568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.841731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.841925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.841967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.842165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.842318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.842362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.842554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.842714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.842746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.842967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.843118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.843145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.843327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.843509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.843534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.843671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.843821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.843867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 
00:21:22.730 [2024-04-24 16:17:23.844028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.844249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.844293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.844481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.844658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.844685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.844887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.845118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.845147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.845342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.845517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.845543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.845735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.845929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.845970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.846185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.846356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.846397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.846564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.846724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.846755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 
00:21:22.730 [2024-04-24 16:17:23.846936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.847092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.847136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.847358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.847526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.847551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.847683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.847882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.847926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.848080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.848308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.848349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.848505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.848688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.848713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.848866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.849091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.849134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 00:21:22.730 [2024-04-24 16:17:23.849343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.849519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.730 [2024-04-24 16:17:23.849546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.730 qpair failed and we were unable to recover it. 
00:21:22.731 [2024-04-24 16:17:23.849682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.849863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.849907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.850086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.850367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.850420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.850610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.850770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.850797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.850994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.851188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.851230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.851408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.851581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.851607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.851811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.851980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.852023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.852247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.852395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.852420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 
00:21:22.731 [2024-04-24 16:17:23.852569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.852767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.852794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.852946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.853123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.853148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.853349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.853543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.853587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.853793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.853938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.853981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.854225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.854435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.854477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.854613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.854775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.854802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.854986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.855187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.855229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 
00:21:22.731 [2024-04-24 16:17:23.855442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.855616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.855641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.855798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.855960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.856002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.856193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.856366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.856419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.856576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.856710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.856738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.856931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.857103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.857132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.857356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.857524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.857549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.857673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.857860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.857904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 
00:21:22.731 [2024-04-24 16:17:23.858112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.858329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.858356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.858515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.858707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.858732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.858907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.859123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.859166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.859373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.859520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.859546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.859703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.859924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.859966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.860269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.860624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.860694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.860900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.861066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.861095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 
00:21:22.731 [2024-04-24 16:17:23.861295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.861474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.861515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.861672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.861855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.861898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.862079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.862307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.862349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.862504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.862660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.862685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.862872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.863065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.731 [2024-04-24 16:17:23.863108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.731 qpair failed and we were unable to recover it. 00:21:22.731 [2024-04-24 16:17:23.863289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.732 [2024-04-24 16:17:23.863492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.732 [2024-04-24 16:17:23.863534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.732 qpair failed and we were unable to recover it. 00:21:22.732 [2024-04-24 16:17:23.863727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.732 [2024-04-24 16:17:23.863916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.732 [2024-04-24 16:17:23.863960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.732 qpair failed and we were unable to recover it. 
00:21:22.732 [2024-04-24 16:17:23.864144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.732 [2024-04-24 16:17:23.864376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:22.732 qpair failed and we were unable to recover it.
00:21:22.736 [the same three-line failure pattern repeats continuously, with only the microsecond timestamps advancing, through 2024-04-24 16:17:23.922402; every connect() attempt targets 10.0.0.2 port 4420, fails with errno = 111, and each resulting qpair is reported as failed and unrecoverable]
00:21:22.736 [2024-04-24 16:17:23.922584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.922712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.922737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.922897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.923093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.923137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.923309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.923476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.923502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.923631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.923769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.923795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.923958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.924161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.924211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.924380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.924537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.924563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.924694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.924859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.924903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 
00:21:22.736 [2024-04-24 16:17:23.925054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.925243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.925286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.925421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.925577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.925602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.925735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.925900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.925944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.926117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.926281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.926325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.926483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.926638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.926664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.926875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.927118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.927146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.927330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.927475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.927502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 
00:21:22.736 [2024-04-24 16:17:23.927631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.927817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.927861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.736 [2024-04-24 16:17:23.928047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.928252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.736 [2024-04-24 16:17:23.928295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.736 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.928427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.928560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.928586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.928752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.928906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.928957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.929151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.929334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.929361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.929501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.929661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.929686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.929848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.930060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.930103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 
00:21:22.737 [2024-04-24 16:17:23.930275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.930456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.930482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.930642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.930777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.930804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.930954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.931116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.931162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.931325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.931512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.931537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.931666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.931811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.931856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.932004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.932202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.932253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.932388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.932516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.932541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 
00:21:22.737 [2024-04-24 16:17:23.932663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.932823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.932850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.933043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.933248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.933292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.933452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.933590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.933625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.933765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.933947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.933992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.934185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.934328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.934352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.934513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.934671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.934698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.934855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.935043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.935074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 
00:21:22.737 [2024-04-24 16:17:23.935269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.935411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.935438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.935568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.935728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.935763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.935896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.936082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.936110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.936333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.936477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.936502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.936686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.936833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.936878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.937023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.937228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.937272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.937433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.937590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.937616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 
00:21:22.737 [2024-04-24 16:17:23.937753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.937917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.937960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.938102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.938307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.938351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.938502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.938659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.938698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.938879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.939094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.939121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.939298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.939431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.939456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.939613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.939752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.939778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.939939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.940136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.940180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 
00:21:22.737 [2024-04-24 16:17:23.940351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.940534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.940559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.940719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.940926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.940970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.941122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.941290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.941332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.941503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.941664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.941690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.941899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.942074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.737 [2024-04-24 16:17:23.942117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.737 qpair failed and we were unable to recover it. 00:21:22.737 [2024-04-24 16:17:23.942297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.942491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.942546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.942677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.942864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.942908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 
00:21:22.738 [2024-04-24 16:17:23.943052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.943215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.943259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.943430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.943619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.943646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.943846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.944020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.944064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.944238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.944407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.944433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.944574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.944712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.944739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.944911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.945076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.945120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.945304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.945481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.945506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 
00:21:22.738 [2024-04-24 16:17:23.945664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.945819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.945848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.946042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.946245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.946293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.946468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.946630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.946657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.946816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.946997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.947040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.947197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.947368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.947394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.947551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.947681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.947707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.947880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.948059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.948102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 
00:21:22.738 [2024-04-24 16:17:23.948256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.948431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.948456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.948639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.948768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.948795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.948980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.949160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.949203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.949422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.949598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.949625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.949830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.950005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.950035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.950234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.950388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.950413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.950551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.950685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.950712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 
00:21:22.738 [2024-04-24 16:17:23.950892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.951119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.951163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.951301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.951469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.951496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.951631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.951839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.951883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.952066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.952297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.952339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.952475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.952660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.952687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.952868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.953044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.953087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.953266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.953415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.953441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 
00:21:22.738 [2024-04-24 16:17:23.953575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.953739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.953772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.953956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.954157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.954200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.954378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.954540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.954567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.954702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.954860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.954905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.955054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.955224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.955266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.955414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.955614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.955639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.955791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.955965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.956009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 
00:21:22.738 [2024-04-24 16:17:23.956190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.956375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.956401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.738 qpair failed and we were unable to recover it. 00:21:22.738 [2024-04-24 16:17:23.956588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.738 [2024-04-24 16:17:23.956758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.956784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.739 qpair failed and we were unable to recover it. 00:21:22.739 [2024-04-24 16:17:23.956943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.957145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.957187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.739 qpair failed and we were unable to recover it. 00:21:22.739 [2024-04-24 16:17:23.957372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.957552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.957578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.739 qpair failed and we were unable to recover it. 00:21:22.739 [2024-04-24 16:17:23.957732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.957942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.957987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.739 qpair failed and we were unable to recover it. 00:21:22.739 [2024-04-24 16:17:23.958150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.958354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.958397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.739 qpair failed and we were unable to recover it. 00:21:22.739 [2024-04-24 16:17:23.958562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.958719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.739 [2024-04-24 16:17:23.958749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:22.739 qpair failed and we were unable to recover it. 
00:21:22.739 [2024-04-24 16:17:23.958919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.739 [2024-04-24 16:17:23.959124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.739 [2024-04-24 16:17:23.959167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:22.739 qpair failed and we were unable to recover it.
[... the same four-line sequence — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7faafc000b90 at addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously with only the timestamps advancing (elapsed-time prefix 00:21:22.739 through 00:21:23.017, log timestamps 16:17:23.959 through 16:17:24.016) ...]
00:21:23.017 [2024-04-24 16:17:24.015806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.017 [2024-04-24 16:17:24.015971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.017 [2024-04-24 16:17:24.016018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.017 qpair failed and we were unable to recover it.
00:21:23.017 [2024-04-24 16:17:24.016226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.016378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.016405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.017 qpair failed and we were unable to recover it. 00:21:23.017 [2024-04-24 16:17:24.016535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.016672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.016697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.017 qpair failed and we were unable to recover it. 00:21:23.017 [2024-04-24 16:17:24.016869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.017069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.017120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.017 qpair failed and we were unable to recover it. 00:21:23.017 [2024-04-24 16:17:24.017305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.017455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.017481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.017 qpair failed and we were unable to recover it. 00:21:23.017 [2024-04-24 16:17:24.017650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.017792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.017818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.017 qpair failed and we were unable to recover it. 00:21:23.017 [2024-04-24 16:17:24.018021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.018191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.018234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.017 qpair failed and we were unable to recover it. 00:21:23.017 [2024-04-24 16:17:24.018371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.018497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.017 [2024-04-24 16:17:24.018523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 
00:21:23.018 [2024-04-24 16:17:24.018686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.018862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.018906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.019047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.019247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.019291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.019421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.019580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.019605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.019756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.019941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.019988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.020168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.020337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.020363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.020520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.020674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.020699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.020872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.021057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.021100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 
00:21:23.018 [2024-04-24 16:17:24.021271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.021440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.021466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.021621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.021755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.021781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.021932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.022080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.022107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.022290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.022440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.022465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.022599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.022727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.022759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.022920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.023106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.023150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.023332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.023459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.023484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 
00:21:23.018 [2024-04-24 16:17:24.023622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.023802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.023832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.024061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.024251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.024294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.024449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.024579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.024605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.024788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.024981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.025024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.025235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.025394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.025419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.025552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.025683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.025708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.025887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.026078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.026121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 
00:21:23.018 [2024-04-24 16:17:24.026301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.026457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.026482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.026635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.026769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.026796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.026949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.027145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.027188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.027374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.027557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.027584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.027737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.027901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.027945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.028133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.028333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.028360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 00:21:23.018 [2024-04-24 16:17:24.028539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.028697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.028722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.018 qpair failed and we were unable to recover it. 
00:21:23.018 [2024-04-24 16:17:24.028910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.018 [2024-04-24 16:17:24.029103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.029146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.029315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.029456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.029482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.029617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.029789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.029818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.029996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.030198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.030241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.030401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.030537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.030562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.030727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.030918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.030963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.031138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.031315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.031345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 
00:21:23.019 [2024-04-24 16:17:24.031501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.031658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.031683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.031865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.032043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.032086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.032239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.032435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.032460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.032624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.032789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.032816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.033024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.033225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.033270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.033431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.033587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.033613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.033800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.033977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.034022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 
00:21:23.019 [2024-04-24 16:17:24.034237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.034418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.034445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.034582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.034748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.034775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.034938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.035141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.035186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.035309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.035464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.035489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.035649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.035849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.035893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.036078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.036253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.036279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.019 qpair failed and we were unable to recover it. 00:21:23.019 [2024-04-24 16:17:24.036438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.036599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.019 [2024-04-24 16:17:24.036624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 
00:21:23.020 [2024-04-24 16:17:24.036771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.036959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.037007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.037167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.037345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.037370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.037549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.037704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.037729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.037927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.038112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.038157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.038348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.038501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.038526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.038698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.038890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.038934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.039135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.039308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.039351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 
00:21:23.020 [2024-04-24 16:17:24.039490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.039627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.039655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.039804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.039995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.040039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.040224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.040384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.040409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.040540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.040667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.040693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.040897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.041081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.041109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.041312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.041491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.041517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.041667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.041815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.041860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 
00:21:23.020 [2024-04-24 16:17:24.042028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.042250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.042303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.042479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.042601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.042627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.042803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.043005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.043050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.043222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.043399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.043425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.043552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.043714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.043752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.043930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.044101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.044144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.044327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.044531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.044557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 
00:21:23.020 [2024-04-24 16:17:24.044697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.044878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.044921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.045075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.045267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.045309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.045441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.045636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.045664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.045833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.046037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.046079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.046281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.046467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.046492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.046647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.046786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.046815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 00:21:23.020 [2024-04-24 16:17:24.046981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.047204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.047249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.020 qpair failed and we were unable to recover it. 
00:21:23.020 [2024-04-24 16:17:24.047406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.020 [2024-04-24 16:17:24.047566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.047592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.047761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.047920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.047963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.048122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.048325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.048366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.048524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.048684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.048714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.048883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.049122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.049167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.049355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.049499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.049525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.049662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.049843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.049887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 
00:21:23.021 [2024-04-24 16:17:24.050076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.050237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.050280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.050445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.050635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.050662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.050811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.050992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.051043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.051219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.051388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.051442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.051610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.051747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.051774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.051917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.052142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.052185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 00:21:23.021 [2024-04-24 16:17:24.052320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.052483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.021 [2024-04-24 16:17:24.052509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.021 qpair failed and we were unable to recover it. 
00:21:23.021 [2024-04-24 16:17:24.052647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.021 [2024-04-24 16:17:24.052822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.021 [2024-04-24 16:17:24.052876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.021 qpair failed and we were unable to recover it.
00:21:23.021 [... the same three-step failure (two posix_sock_create connect() errors with errno = 111, then the nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it.") repeats continuously from 16:17:24.052647 through 16:17:24.110141; every reconnect attempt on tqpair=0x7faafc000b90 to addr=10.0.0.2, port=4420 fails identically ...]
00:21:23.026 [2024-04-24 16:17:24.109917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.026 [2024-04-24 16:17:24.110099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.026 [2024-04-24 16:17:24.110141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.026 qpair failed and we were unable to recover it.
00:21:23.026 [2024-04-24 16:17:24.110291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.026 [2024-04-24 16:17:24.110443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.110468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.110601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.110754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.110780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.110977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.111172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.111215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.111345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.111504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.111536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.111672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.111857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.111899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.112057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.112250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.112292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.112450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.112614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.112641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 
00:21:23.027 [2024-04-24 16:17:24.112822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.113019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.113061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.113245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.113440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.113484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.113644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.113827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.113870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.114046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.114223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.114264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.114426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.114586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.114611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.114772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.114950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.114997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.115155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.115326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.115357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 
00:21:23.027 [2024-04-24 16:17:24.115518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.115648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.115674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.115833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.116047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.116075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.116243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.116422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.116447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.116602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.116760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.116786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.116937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.117113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.117157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.117323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.117518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.117544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.117699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.117847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.117892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 
00:21:23.027 [2024-04-24 16:17:24.118053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.118243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.118270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.118419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.118552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.118579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.118709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.118880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.118932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.119074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.119265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.119291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.119474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.119621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.119646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.119809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.120011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.120054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.120214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.120354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.120378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 
00:21:23.027 [2024-04-24 16:17:24.120511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.120667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.027 [2024-04-24 16:17:24.120694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.027 qpair failed and we were unable to recover it. 00:21:23.027 [2024-04-24 16:17:24.120856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.121080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.121122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.121277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.121479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.121504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.121635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.121842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.121886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.122037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.122241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.122284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.122445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.122601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.122632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.122818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.122979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.123005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 
00:21:23.028 [2024-04-24 16:17:24.123166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.123290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.123317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.123482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.123623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.123656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.123835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.124026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.124055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.124282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.124439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.124466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.124620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.124797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.124826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.125027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.125172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.125198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.125343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.125508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.125535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 
00:21:23.028 [2024-04-24 16:17:24.125695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.125889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.125932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.126107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.126310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.126353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.126513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.126666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.126692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.126851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.127013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.127057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.127214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.127385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.127411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.127595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.127727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.127760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.127944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.128132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.128159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 
00:21:23.028 [2024-04-24 16:17:24.128344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.128523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.128550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.128680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.128861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.128905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.129088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.129261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.129290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.129441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.129631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.129657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.129860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.130063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.028 [2024-04-24 16:17:24.130113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.028 qpair failed and we were unable to recover it. 00:21:23.028 [2024-04-24 16:17:24.130298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.130480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.130507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.130665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.130850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.130880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 
00:21:23.029 [2024-04-24 16:17:24.131080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.131304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.131347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.131506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.131658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.131684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.131871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.132062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.132107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.132260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.132467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.132495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.132651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.132807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.132852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.133014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.133239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.133267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.133471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.133612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.133638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 
00:21:23.029 [2024-04-24 16:17:24.133771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.133946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.133989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.134176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.134335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.134376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.134541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.134678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.134703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.134905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.135058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.135111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.135280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.135493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.135520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.135639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.135819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.135847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 00:21:23.029 [2024-04-24 16:17:24.136029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.136224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.029 [2024-04-24 16:17:24.136271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.029 qpair failed and we were unable to recover it. 
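On Linux, errno 111 is ECONNREFUSED, so the records above show the NVMe/TCP initiator being refused at 10.0.0.2:4420 (4420 is the standard NVMe-oF TCP port). That is the expected state while target_disconnect.sh has the target application down; the Killed message for the old nvmf_tgt appears just below. A minimal probe sketch, assuming a Linux shell with bash's /dev/tcp support and coreutils timeout (the address and port come from the log; nothing here is part of the test itself):

    # With no listener on 10.0.0.2:4420, connect(2) fails with ECONNREFUSED,
    # which Linux numbers 111: the same "errno = 111" logged by posix_sock_create.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused or unreachable (expected while the target is down)"
    fi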
00:21:23.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3479097 Killed "${NVMF_APP[@]}" "$@"
00:21:23.029 16:17:24 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:21:23.029 16:17:24 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:21:23.029 16:17:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:23.029 16:17:24 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:23.029 16:17:24 -- common/autotest_common.sh@10 -- # set +x
00:21:23.030 16:17:24 -- nvmf/common.sh@470 -- # nvmfpid=3479633
00:21:23.030 16:17:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:21:23.030 16:17:24 -- nvmf/common.sh@471 -- # waitforlisten 3479633
00:21:23.030 16:17:24 -- common/autotest_common.sh@817 -- # '[' -z 3479633 ']'
00:21:23.030 16:17:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:23.030 16:17:24 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:23.030 16:17:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:23.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:23.030 16:17:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:23.030 16:17:24 -- common/autotest_common.sh@10 -- # set +x
00:21:23.030 [... connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error records for tqpair=0x7faafc000b90 were interleaved with the trace above, 16:17:24.136405 through 16:17:24.143279; untangled here, repetitions elided ...]
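The trace above shows the harness restarting nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then waiting for it to come up (rpc_addr=/var/tmp/spdk.sock, max_retries=100). A simplified, hypothetical stand-in for that waitforlisten step, assuming bash; the real SPDK helper does more (for example RPC probing), and wait_for_rpc_sock is an illustrative name only:

    # Poll until the UNIX-domain RPC socket appears, up to ~10 s (100 * 0.1 s).
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
        while (( retries-- > 0 )); do
            [[ -S $sock ]] && return 0   # -S: path exists and is a socket
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }
    wait_for_rpc_sock /var/tmp/spdk.sock 100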
00:21:23.030 [2024-04-24 16:17:24.143462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.143619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.143645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.143847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.144032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.144076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.144261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.144443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.144470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.144612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.144793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.144824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.145031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.145180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.145207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.145381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.145544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.145572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.145731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.145909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.145954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 
00:21:23.030 [2024-04-24 16:17:24.146131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.146323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.146373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.146554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.146705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.146732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.146899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.147123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.147170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.147365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.147546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.147573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.147756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.147957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.148001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.148155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.148353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.148382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 00:21:23.030 [2024-04-24 16:17:24.148565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.148768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.030 [2024-04-24 16:17:24.148795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.030 qpair failed and we were unable to recover it. 
00:21:23.034 [2024-04-24 16:17:24.191466] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization...
00:21:23.034 [2024-04-24 16:17:24.191541] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
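For reference, the "-c 0xF0" EAL parameter above is a hexadecimal core mask: each set bit selects one CPU core, so 0xF0 (binary 11110000) pins the nvmf target to cores 4-7. A tiny illustrative sketch, not DPDK code, that decodes such a mask:

    /* Sketch: decoding a DPDK-style EAL core mask like "-c 0xF0".
     * Each set bit selects one CPU core; 0xF0 selects cores 4-7.
     * Illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0;  /* from "-c 0xF0" in the EAL parameters */
        for (int core = 0; core < 8 * (int)sizeof(mask); core++)
            if (mask & (1UL << core))
                printf("core %d selected\n", core);  /* prints cores 4..7 */
        return 0;
    }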
[... connect() failed, errno = 111 and qpair connection errors continue through 16:17:24.206; every connection attempt to 10.0.0.2:4420 ends with "qpair failed and we were unable to recover it." ...]
00:21:23.036 [2024-04-24 16:17:24.206835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.207000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.207026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.207159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.207325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.207350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.207493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.207673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.207698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.207870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.208031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.208057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.208185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.208337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.208381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.208543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.208701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.208728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.208882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.209063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.209089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 
00:21:23.036 [2024-04-24 16:17:24.209245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.209367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.209392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.209525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.209670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.209696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.209849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.209973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.209999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.210199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.210354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.210379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.210536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.210717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.210760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.210917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.211076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.211101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.211244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.211403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.211433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 
00:21:23.036 [2024-04-24 16:17:24.211609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.211802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.211829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.211990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.212156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.212181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.212351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.212536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.212562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.212753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.212888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.212914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.213070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.213221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.213247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.213432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.213558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.213584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.213736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.213877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.213904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 
00:21:23.036 [2024-04-24 16:17:24.214095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.214301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.214345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.036 [2024-04-24 16:17:24.214543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.214700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.036 [2024-04-24 16:17:24.214733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.036 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.214914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.215076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.215102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.215274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.215441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.215485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.215648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.215817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.215844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.216001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.216176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.216220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.216463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.216662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.216688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 
00:21:23.037 [2024-04-24 16:17:24.216828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.216964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.216989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.217149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.217364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.217429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.217593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.217840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.217866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.218001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.218245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.218270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.218429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.218587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.218612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.218804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.218947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.218972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.219171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.219355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.219381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 
00:21:23.037 [2024-04-24 16:17:24.219544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.219780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.219807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.219958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.220124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.220150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.220339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.220509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.220534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.220703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.220895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.220922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.221051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.221205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.221252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.221433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.221603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.221629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.221808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.221944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.221971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 
00:21:23.037 [2024-04-24 16:17:24.222135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.222315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.222340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.222470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.222631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.222656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.222815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.222944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.222970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.223120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.223333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.223377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.223548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.223714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.223757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.223893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.224050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.224094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.224291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.224426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.224452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 
00:21:23.037 [2024-04-24 16:17:24.224584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.224719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.224764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.037 qpair failed and we were unable to recover it. 00:21:23.037 [2024-04-24 16:17:24.224920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.225143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.037 [2024-04-24 16:17:24.225197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.038 qpair failed and we were unable to recover it. 00:21:23.038 [2024-04-24 16:17:24.225362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.225500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.225527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.038 qpair failed and we were unable to recover it. 00:21:23.038 [2024-04-24 16:17:24.225708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.225897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.225923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.038 qpair failed and we were unable to recover it. 00:21:23.038 [2024-04-24 16:17:24.226059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.226309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.226335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.038 qpair failed and we were unable to recover it. 00:21:23.038 [2024-04-24 16:17:24.226471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.226708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.226734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.038 qpair failed and we were unable to recover it. 00:21:23.038 [2024-04-24 16:17:24.226888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.227030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.038 [2024-04-24 16:17:24.227055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.038 qpair failed and we were unable to recover it. 
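errno = 111 is ECONNREFUSED: every TCP connect() toward 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) is rejected because nothing is listening there, so the host-side qpair can never be established. A minimal way to observe the same refusal from the test node, sketched under the assumption that bash's /dev/tcp redirection is available (address and port are taken from the log above):

  # attempt a raw TCP connection to the target address/port from the log;
  # while no listener is up, bash reports "Connection refused"
  $ (exec 3<>/dev/tcp/10.0.0.2/4420) || echo "target not reachable"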
00:21:23.038 [... the failure sequence continues with timestamps from 16:17:24.227211 through 16:17:24.228280 ...]
00:21:23.038 EAL: No free 2048 kB hugepages reported on node 1
00:21:23.038 [... the failure sequence continues with timestamps from 16:17:24.228470 through 16:17:24.229857 ...]
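The interleaved EAL line comes from DPDK's Environment Abstraction Layer and reports that no free 2048 kB hugepages were found on NUMA node 1; SPDK relies on hugepages for its DMA-able memory pools. A hedged sketch of the usual check and remedy, assuming the SPDK checkout used by this job and its scripts/setup.sh helper (the HUGEMEM value, in MB, is illustrative):

  # inspect the current hugepage pools
  $ grep -i hugepages /proc/meminfo
  # reserve hugepages and bind NVMe devices for SPDK
  $ sudo HUGEMEM=2048 ./spdk/scripts/setup.sh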
00:21:23.038 [... the failure sequence continues with timestamps from 16:17:24.230018 through 16:17:24.252051 ...]
00:21:23.040 [2024-04-24 16:17:24.252278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.040 [2024-04-24 16:17:24.252443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.040 [2024-04-24 16:17:24.252468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.040 qpair failed and we were unable to recover it.
00:21:23.040 [2024-04-24 16:17:24.252657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.252800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.252826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.252967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.253110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.253136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.253276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.253478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.253514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.253674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.253847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.253874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.254105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.254255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.254280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.254475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.254635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.254660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.254825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.254984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.255009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 
00:21:23.040 [2024-04-24 16:17:24.255162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.255341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.255366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.255533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.255708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.255734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.255883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.256043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.256069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.256209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.256368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.040 [2024-04-24 16:17:24.256393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.040 qpair failed and we were unable to recover it. 00:21:23.040 [2024-04-24 16:17:24.256578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.256708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.256733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.256878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.257001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.257026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.257190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.257363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.257387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 
00:21:23.041 [2024-04-24 16:17:24.257558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.257754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.257780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.258011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.258257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.258282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.258477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.258660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.258685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.258839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.258955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.258980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.259139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.259320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.259345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.259502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.259671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.259696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.259835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.259992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.260019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 
00:21:23.041 [2024-04-24 16:17:24.260145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.260282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.260316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.260538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.260696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.260721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.260861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.261009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.261034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.261163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.261314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.261339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.261473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.261631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.261656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.261823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.261984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.262009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.262153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.262284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.262310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 
00:21:23.041 [2024-04-24 16:17:24.263667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.041 [2024-04-24 16:17:24.263830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.041 [2024-04-24 16:17:24.263856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.041 qpair failed and we were unable to recover it.
00:21:23.041 [2024-04-24 16:17:24.263893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:23.041 [2024-04-24 16:17:24.264649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.264831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.264857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.264994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.265220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.265245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.265471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.265635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.265661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.265812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.265980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.266005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.266198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.266363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.266388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.266547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.266675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.266699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 00:21:23.041 [2024-04-24 16:17:24.266854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.267006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.267031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.041 qpair failed and we were unable to recover it. 
00:21:23.041 [2024-04-24 16:17:24.267231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.267423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.041 [2024-04-24 16:17:24.267448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.267632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.267824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.267850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.267982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.268133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.268159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.268384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.268550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.268576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.268729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.268906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.268932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.269120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.269255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.269291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.269431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.269557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.269583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 
00:21:23.042 [2024-04-24 16:17:24.269764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.269883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.269908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.270088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.270209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.270235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.270369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.270500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.270524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.270661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.270846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.270872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.271039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.271201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.271226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.271349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.271476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.271503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.271670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.271807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.271834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 
00:21:23.042 [2024-04-24 16:17:24.271955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.272117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.272142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.272271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.272559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.272584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.272810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.272974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.273001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.273159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.273320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.273345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.273527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.273689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.273715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.273850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.274009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.274035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.274194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.274349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.274375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 
00:21:23.042 [2024-04-24 16:17:24.274532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.274692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.274717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.274893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.275085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.275111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.275273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.275470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.275497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.275756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.275918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.275944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.276095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.276266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.276293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.276471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.276614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.276640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.276836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.277002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.277027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 
00:21:23.042 [2024-04-24 16:17:24.277170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.277328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.277355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.042 [2024-04-24 16:17:24.277604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.277786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.042 [2024-04-24 16:17:24.277827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.042 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.278018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.278168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.278194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.278322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.278561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.278587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.278753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.278990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.279016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.279224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.279369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.279394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.279547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.279738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.279773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 
00:21:23.043 [2024-04-24 16:17:24.279916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.280100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.280125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.280256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.280420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.280446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.280600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.280766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.280795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.280963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.281126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.281164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.281360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.281603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.281645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.281800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.281938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.281963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.282103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.282242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.282268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 
00:21:23.043 [2024-04-24 16:17:24.282391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.282548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.282574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.282812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.282928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.282953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.283113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.283302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.283327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.283468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.283630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.283655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.283800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.283942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.283967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.284114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.284346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.284371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.043 [2024-04-24 16:17:24.284542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.284696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.284722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 
00:21:23.043 [2024-04-24 16:17:24.284862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.284975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.043 [2024-04-24 16:17:24.285001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.043 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.285168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.285300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.285329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.285499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.285686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.285712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.285955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.286096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.286122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.286262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.286419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.286446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.286628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.286812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.286839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.286997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.287136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.287161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 
00:21:23.324 [2024-04-24 16:17:24.287330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.287465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.287490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.287649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.287887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.287913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.288047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.288219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.288245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.288402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.288522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.288549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.288684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.288850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.288876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.289040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.289163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.289189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 00:21:23.324 [2024-04-24 16:17:24.289307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.289435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.324 [2024-04-24 16:17:24.289462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.324 qpair failed and we were unable to recover it. 
00:21:23.330 [2024-04-24 16:17:24.337544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.337702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.337728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.337908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.338030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.338062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.338202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.338357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.338382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.338532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.338663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.338689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.338840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.339025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.339055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.339182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.339341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.339367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.339482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.339616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.339642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 
00:21:23.330 [2024-04-24 16:17:24.339800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.339949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.339975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.340108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.340270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.340296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.340478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.340660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.340686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.340834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.341001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.341034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.341174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.341342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.341368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.341487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.341647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.341673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.341843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.341970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.341998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 
00:21:23.330 [2024-04-24 16:17:24.342170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.342328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.342353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.330 qpair failed and we were unable to recover it. 00:21:23.330 [2024-04-24 16:17:24.342504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.330 [2024-04-24 16:17:24.342632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.342657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.342827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.342967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.342992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.343106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.343236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.343262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.343422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.343580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.343605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.343720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.343883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.343909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.344037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.344178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.344204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 
00:21:23.331 [2024-04-24 16:17:24.344363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.344541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.344567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.344727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.344868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.344894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.345027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.345191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.345217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.345407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.345557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.345582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.345752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.345882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.345908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.346043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.346169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.346195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.346326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.346454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.346480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 
00:21:23.331 [2024-04-24 16:17:24.346641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.346800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.346827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.346983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.347152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.347177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.347308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.347461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.347487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.347620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.347753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.347779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.347902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.348042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.348068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.348185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.348312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.348338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.348465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.348623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.348650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 
00:21:23.331 [2024-04-24 16:17:24.348789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.348945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.348971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.349118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.349276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.349301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.349455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.349560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.349585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.349752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.349907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.349932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.350097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.350264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.350288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.350441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.350594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.350620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.350753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.350913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.350939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 
00:21:23.331 [2024-04-24 16:17:24.351099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.351224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.351250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.351410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.351591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.331 [2024-04-24 16:17:24.351617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.331 qpair failed and we were unable to recover it. 00:21:23.331 [2024-04-24 16:17:24.351724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.351875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.351900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.352056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.352205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.352231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.352384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.352534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.352560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.352679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.352850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.352878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.353065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.353200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.353226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 
00:21:23.332 [2024-04-24 16:17:24.353358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.353510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.353535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.353698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.353863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.353889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.354049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.354177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.354202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.354334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.354490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.354516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.354662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.354814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.354840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.354994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.355128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.355154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.355327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.355507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.355533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 
00:21:23.332 [2024-04-24 16:17:24.355660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.355824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.355851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.356002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.356126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.356152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.356309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.356466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.356493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.356677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.356814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.356840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.356999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.357110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.357136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.357293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.357477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.357502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.357662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.357808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.357836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 
00:21:23.332 [2024-04-24 16:17:24.357957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.358094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.358119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.358284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.358444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.358471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.358620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.358749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.358776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.358989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.359141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.359167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.359305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.359462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.359488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.359620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.359754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.359782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.359945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.360100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.360125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 
00:21:23.332 [2024-04-24 16:17:24.360261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.360485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.360511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.360694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.360825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.360852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.332 qpair failed and we were unable to recover it. 00:21:23.332 [2024-04-24 16:17:24.360982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.361146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.332 [2024-04-24 16:17:24.361172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.361331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.361481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.361507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.361622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.361801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.361833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.361962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.362119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.362147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.362306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.362462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.362487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 
00:21:23.333 [2024-04-24 16:17:24.362645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.362775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.362802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.362965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.363087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.363114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.363256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.363364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.363389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.363530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.363693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.363719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.363905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.364044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.364070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.364254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.364412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.364438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.364569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.364727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.364760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 
00:21:23.333 [2024-04-24 16:17:24.364946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.365102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.365132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.365268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.365449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.365475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.365637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.365800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.365826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.365996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.366128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.366155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.366319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.366501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.366527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.366708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.366843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.366869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.367030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.367152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.367178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 
00:21:23.333 [2024-04-24 16:17:24.367358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.367493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.367518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.367685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.367833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.367859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.368043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.368204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.368230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.368420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.368578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.368609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.368734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.368902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.368928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.369089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.369282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.369308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.369468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.369623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.369650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 
00:21:23.333 [2024-04-24 16:17:24.369792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.369915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.369940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.370067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.370182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.370208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.370366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.370528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.370554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.333 qpair failed and we were unable to recover it. 00:21:23.333 [2024-04-24 16:17:24.370685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.333 [2024-04-24 16:17:24.370837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.334 [2024-04-24 16:17:24.370864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.334 qpair failed and we were unable to recover it. 00:21:23.334 [2024-04-24 16:17:24.371020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.334 [2024-04-24 16:17:24.371177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.334 [2024-04-24 16:17:24.371204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.334 qpair failed and we were unable to recover it. 00:21:23.334 [2024-04-24 16:17:24.371358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.334 [2024-04-24 16:17:24.371477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.334 [2024-04-24 16:17:24.371503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.334 qpair failed and we were unable to recover it. 00:21:23.334 [2024-04-24 16:17:24.371658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.334 [2024-04-24 16:17:24.371810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.334 [2024-04-24 16:17:24.371842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.334 qpair failed and we were unable to recover it. 
00:21:23.334 [2024-04-24 16:17:24.371971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.334 [2024-04-24 16:17:24.372132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.334 [2024-04-24 16:17:24.372158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.334 qpair failed and we were unable to recover it.
00:21:23.334 [2024-04-24 16:17:24.378836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.378970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.378996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.335 qpair failed and we were unable to recover it.
00:21:23.335 [2024-04-24 16:17:24.379128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.379126] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:23.335 [2024-04-24 16:17:24.379161] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:23.335 [2024-04-24 16:17:24.379176] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:23.335 [2024-04-24 16:17:24.379189] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:23.335 [2024-04-24 16:17:24.379200] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:23.335 [2024-04-24 16:17:24.379256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.379280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.335 qpair failed and we were unable to recover it.
00:21:23.335 [2024-04-24 16:17:24.379260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:21:23.335 [2024-04-24 16:17:24.379315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:21:23.335 [2024-04-24 16:17:24.379340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:21:23.335 [2024-04-24 16:17:24.379351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:21:23.335 [2024-04-24 16:17:24.379409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.379544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.379569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.335 qpair failed and we were unable to recover it.
00:21:23.335 [2024-04-24 16:17:24.379699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.379831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.379857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.335 qpair failed and we were unable to recover it.
00:21:23.335 [2024-04-24 16:17:24.379989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.380146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.335 [2024-04-24 16:17:24.380171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.335 qpair failed and we were unable to recover it.
00:21:23.339 [2024-04-24 16:17:24.418848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.339 [2024-04-24 16:17:24.418981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.339 [2024-04-24 16:17:24.419007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.339 qpair failed and we were unable to recover it.
00:21:23.339 [2024-04-24 16:17:24.419141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.419264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.419288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.339 qpair failed and we were unable to recover it. 00:21:23.339 [2024-04-24 16:17:24.419436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.419572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.419596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.339 qpair failed and we were unable to recover it. 00:21:23.339 [2024-04-24 16:17:24.419723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.419867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.419894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.339 qpair failed and we were unable to recover it. 00:21:23.339 [2024-04-24 16:17:24.420035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.420156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.420182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.339 qpair failed and we were unable to recover it. 00:21:23.339 [2024-04-24 16:17:24.420297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.420454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.420479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.339 qpair failed and we were unable to recover it. 00:21:23.339 [2024-04-24 16:17:24.420639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.420784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.420810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.339 qpair failed and we were unable to recover it. 00:21:23.339 [2024-04-24 16:17:24.420932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.421062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.339 [2024-04-24 16:17:24.421087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.339 qpair failed and we were unable to recover it. 
00:21:23.339 [2024-04-24 16:17:24.421225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.421362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.421387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.421516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.421651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.421675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.421818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.421955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.421982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.422096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.422234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.422262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.422403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.422530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.422557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.422675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.422793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.422819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.422955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.423090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.423114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 
00:21:23.340 [2024-04-24 16:17:24.423243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.423405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.423430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.423597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.423747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.423775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.423897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.424042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.424073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.424198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.424351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.424375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.424511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.424629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.424654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.424795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.424933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.424957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.425088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.425227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.425251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 
00:21:23.340 [2024-04-24 16:17:24.425379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.425512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.425537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.425662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.425815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.425841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.425973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.426099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.426123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.426269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.426388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.426413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.426558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.426696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.426722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.426845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.427019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.427044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.427180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.427305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.427331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 
00:21:23.340 [2024-04-24 16:17:24.427465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.427581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.427605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.427728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.427874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.427899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.428043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.428173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.428198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.428331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.428449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.428475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.428598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.428737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.428793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.428927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.429087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.429112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 00:21:23.340 [2024-04-24 16:17:24.429273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.429395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.429419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.340 qpair failed and we were unable to recover it. 
00:21:23.340 [2024-04-24 16:17:24.429568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.340 [2024-04-24 16:17:24.429680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.429705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.429874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.430001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.430031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.430158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.430287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.430313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.430434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.430558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.430584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.430739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.430907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.430938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.431066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.431231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.431256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.431416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.431565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.431590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 
00:21:23.341 [2024-04-24 16:17:24.431726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.431875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.431901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.432036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.432159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.432183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.432303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.432424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.432449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.432591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.432750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.432776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.432926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.433052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.433083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.433222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.433341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.433367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.433529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.433695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.433720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 
00:21:23.341 [2024-04-24 16:17:24.433867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.434001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.434025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.434163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.434322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.434347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.434487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.434644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.434671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.434800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.434957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.434982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.435137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.435272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.435306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.435427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.435589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.435614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.435762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.435905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.435930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 
00:21:23.341 [2024-04-24 16:17:24.436066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.436196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.436226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.436366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.436493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.436519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.436645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.436796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.436822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.436966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.437089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.437115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.437280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.437408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.437434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.437567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.437724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.437755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.437893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.438059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.438086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 
00:21:23.341 [2024-04-24 16:17:24.438228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.438339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.438365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.341 qpair failed and we were unable to recover it. 00:21:23.341 [2024-04-24 16:17:24.438499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.341 [2024-04-24 16:17:24.438611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.438635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.438769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.438903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.438929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.439072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.439200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.439229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.439364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.439502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.439529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.439658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.439794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.439821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.439950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.440080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.440106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 
00:21:23.342 [2024-04-24 16:17:24.440267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.440379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.440404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.440585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.440709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.440734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.440882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.441009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.441033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.441163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.441278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.441303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.441450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.441569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.441593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.441755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.441879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.441905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.442051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.442183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.442209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 
00:21:23.342 [2024-04-24 16:17:24.442370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.442529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.442553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.442692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.442834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.442860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.442989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.443120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.443146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.443308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.443431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.443458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.443592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.443722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.443757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.443894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.444028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.444053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.444191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.444330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.444355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 
00:21:23.342 [2024-04-24 16:17:24.444485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.444642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.444667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.444798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.444953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.444977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.445097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.445232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.445256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.445404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.445567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.445593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.445712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.445883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.445909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.446068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.446230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.446255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 00:21:23.342 [2024-04-24 16:17:24.446390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.446524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.446548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.342 qpair failed and we were unable to recover it. 
00:21:23.342 [2024-04-24 16:17:24.446678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.446830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.342 [2024-04-24 16:17:24.446856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.343 qpair failed and we were unable to recover it. 00:21:23.343 [2024-04-24 16:17:24.447006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.447138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.447163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.343 qpair failed and we were unable to recover it. 00:21:23.343 [2024-04-24 16:17:24.447324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.447461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.447488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.343 qpair failed and we were unable to recover it. 00:21:23.343 [2024-04-24 16:17:24.447620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.447753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.447779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.343 qpair failed and we were unable to recover it. 00:21:23.343 [2024-04-24 16:17:24.447916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.448049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.448075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.343 qpair failed and we were unable to recover it. 00:21:23.343 [2024-04-24 16:17:24.448229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.448357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.448383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.343 qpair failed and we were unable to recover it. 00:21:23.343 [2024-04-24 16:17:24.448528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.448671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.448697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.343 qpair failed and we were unable to recover it. 
00:21:23.343 [2024-04-24 16:17:24.448860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.449004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.343 [2024-04-24 16:17:24.449029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.343 qpair failed and we were unable to recover it.
00:21:23.343 [the sequence above repeats continuously, with identical content and advancing timestamps, from 2024-04-24 16:17:24.448860 through 16:17:24.495668: each attempt logs two posix_sock_create connect() failures with errno = 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."]
00:21:23.349 [2024-04-24 16:17:24.495827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.495970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.495996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.496149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.496272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.496298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.496437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.496591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.496616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.496775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.496906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.496937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.497095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.497221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.497245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.497391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.497547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.497573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.497739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.497885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.497910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 
00:21:23.349 [2024-04-24 16:17:24.498030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.498169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.498194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.498314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.498439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.498465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.498605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.498727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.498758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.498920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.499051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.499076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.499208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.499340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.499367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.499485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.499618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.499644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.499786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.499924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.499949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 
00:21:23.349 [2024-04-24 16:17:24.500066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.500192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.500216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.500382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.500512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.500537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.500657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.500820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.500847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.500964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.501079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.501105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.501241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.501379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.501404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.501543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.501668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.501693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.501822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.501958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.501983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 
00:21:23.349 [2024-04-24 16:17:24.502134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.502250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.502274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.502407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.502525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.502550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.502685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.502828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.502853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.502976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.503143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.503167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.349 qpair failed and we were unable to recover it. 00:21:23.349 [2024-04-24 16:17:24.503321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.349 [2024-04-24 16:17:24.503440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.503466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.503600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.503754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.503779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.503915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.504053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.504077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 
00:21:23.350 [2024-04-24 16:17:24.504237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.504370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.504395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.504525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.504664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.504691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.504853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.504981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.505006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.505132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.505253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.505278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.505406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.505530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.505554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.505674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.505824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.505850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.505979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.506136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.506160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 
00:21:23.350 [2024-04-24 16:17:24.506319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.506447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.506472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.506610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.506736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.506768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.506890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.507031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.507056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.507171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.507311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.507336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.507492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.507620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.507645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.507782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.507914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.507939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.508101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.508233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.508257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 
00:21:23.350 [2024-04-24 16:17:24.508380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.508494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.508520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.508637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.508806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.508834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.508965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.509124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.509149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.509279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.509411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.509436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.509575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.509756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.509801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.509929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.510047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.510072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.510201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.510338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.510364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 
00:21:23.350 [2024-04-24 16:17:24.510525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.510648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.510674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.510825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.510961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.510987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.511115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.511263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.511288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.511472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.511590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.511614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.511752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.511893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.350 [2024-04-24 16:17:24.511918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.350 qpair failed and we were unable to recover it. 00:21:23.350 [2024-04-24 16:17:24.512059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.512187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.512212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.512331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.512481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.512508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 
00:21:23.351 [2024-04-24 16:17:24.512633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.512798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.512824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.512952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.513093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.513119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.513256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.513383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.513407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.513538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.513694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.513719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.513855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.513975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.514001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.514127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.514290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.514317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.514469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.514592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.514618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 
00:21:23.351 [2024-04-24 16:17:24.514758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.514884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.514909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.515030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.515170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.515195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.515348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.515470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.515495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.515611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.515748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.515773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.515903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.516066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.516091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.516197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.516353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.516380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.516511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.516649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.516675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 
00:21:23.351 [2024-04-24 16:17:24.516815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.516976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.517000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.517154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.517287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.517311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.517467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.517599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.517624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.517748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.517891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.517917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.518047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.518174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.518202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.518339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.518459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.518485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.518620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.518759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.518784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 
00:21:23.351 [2024-04-24 16:17:24.518919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.519044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.519069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.519225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.519347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.519371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.519488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.519646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.519672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.519791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.519916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.519942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.520075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.520206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.520238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.520379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.520495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.351 [2024-04-24 16:17:24.520521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.351 qpair failed and we were unable to recover it. 00:21:23.351 [2024-04-24 16:17:24.520649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.520785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.520811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 
00:21:23.352 [2024-04-24 16:17:24.520969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.521101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.521126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.521254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.521405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.521431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.521549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.521675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.521701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.521863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.521999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.522024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.522154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.522297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.522324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.522495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.522633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.522657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.522794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.522917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.522942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 
00:21:23.352 [2024-04-24 16:17:24.523067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.523199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.523224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.523377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.523484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.523509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.523641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.523768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.523794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.523924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.524052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.524078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.524215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.524358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.524383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.524505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.524640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.524670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.524812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.524949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.524973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 
00:21:23.352 [2024-04-24 16:17:24.525098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.525237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.525264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.525403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.525525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.525550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.525681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.525815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.525840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.525970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.526102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.526127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.526258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.526382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.526406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.526528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.526658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.526684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.526824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.526954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.526979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 
00:21:23.352 [2024-04-24 16:17:24.527105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.527258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.527285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.527401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.527528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.527558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.352 qpair failed and we were unable to recover it. 00:21:23.352 [2024-04-24 16:17:24.527715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.527843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.352 [2024-04-24 16:17:24.527869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.353 qpair failed and we were unable to recover it. 00:21:23.353 [2024-04-24 16:17:24.527999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.353 [2024-04-24 16:17:24.528161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.353 [2024-04-24 16:17:24.528186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.353 qpair failed and we were unable to recover it. 00:21:23.353 [2024-04-24 16:17:24.528315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.353 [2024-04-24 16:17:24.528431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.353 [2024-04-24 16:17:24.528455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.353 qpair failed and we were unable to recover it. 00:21:23.353 [2024-04-24 16:17:24.528585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.353 [2024-04-24 16:17:24.528711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.353 [2024-04-24 16:17:24.528735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.353 qpair failed and we were unable to recover it. 00:21:23.353 [2024-04-24 16:17:24.528875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.353 [2024-04-24 16:17:24.528996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.353 [2024-04-24 16:17:24.529020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.353 qpair failed and we were unable to recover it. 
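For anyone triaging this failure: errno 111 on Linux is ECONNREFUSED, i.e. the target at 10.0.0.2 answered the TCP SYN with a reset because nothing was listening on port 4420 (the standard NVMe/TCP port), so the connect() inside posix_sock_create() fails and nvme_tcp gives up on the queue pair. A minimal standalone sketch, not SPDK code, that reproduces the same errno against a reachable host with no listener (address and port taken from the log above):

    /* Sketch only: a plain TCP connect() to a host where nothing listens on
     * the port fails with errno 111 (ECONNREFUSED) on Linux, matching the
     * posix_sock_create() records above. If the host is unreachable instead,
     * connect() reports a timeout or unreachable error rather than 111. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),        /* NVMe/TCP port from the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }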
00:21:23.353 16:17:24 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:23.353 16:17:24 -- common/autotest_common.sh@850 -- # return 0
00:21:23.353 16:17:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:23.353 16:17:24 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:23.353 16:17:24 -- common/autotest_common.sh@10 -- # set +x
00:21:23.355 16:17:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:23.355 16:17:24 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:23.355 16:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:23.355 16:17:24 -- common/autotest_common.sh@10 -- # set +x
00:21:23.355 [2024-04-24 16:17:24.555129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.355 [2024-04-24 16:17:24.555289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.355 [2024-04-24 16:17:24.555320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:21:23.355 qpair failed and we were unable to recover it.
00:21:23.357 [2024-04-24 16:17:24.572531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.572656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.572681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.357 qpair failed and we were unable to recover it. 00:21:23.357 [2024-04-24 16:17:24.572847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.573003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.573030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.357 qpair failed and we were unable to recover it. 00:21:23.357 [2024-04-24 16:17:24.573168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.573302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.573327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.357 qpair failed and we were unable to recover it. 00:21:23.357 [2024-04-24 16:17:24.573496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.573654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.573679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.357 qpair failed and we were unable to recover it. 00:21:23.357 [2024-04-24 16:17:24.573812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.573963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.357 [2024-04-24 16:17:24.573989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.357 qpair failed and we were unable to recover it. 00:21:23.357 [2024-04-24 16:17:24.574120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.574251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.574276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.574436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.574564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.574589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 
00:21:23.358 [2024-04-24 16:17:24.574752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.574886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.574912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.575055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.575181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.575207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.575331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.575489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.575516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.575671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.575793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.575818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.575994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.576156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.576182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.576326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.576478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.576504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.576660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.576782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.576817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 
00:21:23.358 [2024-04-24 16:17:24.577027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.577180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.577204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.577345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.577487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.577513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.577643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.577766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.577793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.577956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.578086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.578111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.578242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.578359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.578385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.578519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.578648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.578673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.578822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.578957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.578982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 
00:21:23.358 [2024-04-24 16:17:24.579150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.579305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.579330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.579459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.579597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.579622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.579782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.579911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.579937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.580067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.580188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.580213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.580359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.580485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.580509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.580667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.580837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.580862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 00:21:23.358 [2024-04-24 16:17:24.581003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.581166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.358 [2024-04-24 16:17:24.581191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:21:23.358 qpair failed and we were unable to recover it. 
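errno = 111 is ECONNREFUSED on Linux: the initiator's connect() reaches 10.0.0.2, but nothing is listening on port 4420 yet, so the kernel refuses the TCP handshake and the NVMe/TCP qpair setup fails. The retry storm is expected until the target-side listener is created later in this log. A minimal bash probe (hypothetical, not part of the test script) that distinguishes this state:

    # Probe 10.0.0.2:4420 using bash's /dev/tcp pseudo-device.
    # connect() failing with ECONNREFUSED (errno 111) means the host is
    # reachable but no listener has claimed the port yet.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "listener is up"        # fd 3 closes with the subshell
    else
        echo "connection refused: no NVMe/TCP listener on 4420 yet"
    fi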
00:21:23.358 Malloc0
00:21:23.358 16:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:23.358 16:17:24 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:21:23.358 16:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:23.358 16:17:24 -- common/autotest_common.sh@10 -- # set +x
00:21:23.359 [2024-04-24 16:17:24.585572] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:23.359 [... connect() failed, errno = 111 / sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- reconnect attempts continue throughout this interval, 16:17:24.581324 through 16:17:24.587406 ...]
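The xtrace lines above show the test creating the TCP transport on the target (rpc_cmd nvmf_create_transport -t tcp -o), which the target acknowledges with the *** TCP Transport Init *** notice. A sketch of the same step using SPDK's stock scripts/rpc.py against a running nvmf_tgt; the -o flag passed by the script's rpc_cmd wrapper is left out here, since its expansion depends on the wrapper:

    # Create the NVMe-oF TCP transport on the target
    scripts/rpc.py nvmf_create_transport -t TCP

Until the transport, subsystem, and listener all exist, every initiator connect() keeps failing as seen above.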
00:21:23.625 16:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:23.626 16:17:24 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:23.626 16:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:23.626 16:17:24 -- common/autotest_common.sh@10 -- # set +x
00:21:23.626 [... connect() failed, errno = 111 / sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- reconnect attempts continue throughout this interval, 16:17:24.587528 through 16:17:24.595601 ...]
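Next the script creates the subsystem the initiator has been trying to reach. An equivalent rpc.py invocation, with the values taken from the xtrace line (-a allows any host NQN to connect, -s sets the serial number):

    # Create subsystem cnode1, open to any host
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001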
00:21:23.626 16:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:23.627 16:17:24 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:23.627 16:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:23.627 16:17:24 -- common/autotest_common.sh@10 -- # set +x
00:21:23.627 [... connect() failed, errno = 111 / sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- reconnect attempts continue throughout this interval, 16:17:24.595770 through 16:17:24.603971 ...]
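The Malloc0 printed earlier is the bdev name returned when the test created a RAM-backed malloc bdev; here it is attached to the subsystem as a namespace. A sketch of both steps with rpc.py; the 64 MiB size and 512-byte block size are assumptions for illustration, not values from this log:

    # Create a RAM-backed bdev and expose it as a namespace of cnode1
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0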
[... connect() failed, errno = 111 retries continue, timestamps 16:17:24.604 through 16:17:24.609, each ending "qpair failed and we were unable to recover it." ...]
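errno = 111 in the posix_sock_create failures above is ECONNREFUSED: the SPDK host stack keeps retrying against 10.0.0.2:4420 before the target listener exists (the listener only comes up at 16:17:24.613854 below). A quick bash probe for the same condition, as a sketch (assumes bash built with /dev/tcp support and coreutils timeout):

  until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      sleep 0.1   # connect() keeps failing with ECONNREFUSED until the listener is added
  done
  echo "10.0.0.2:4420 is now accepting connections"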
00:21:23.628 16:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:23.628 16:17:24 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:23.628 16:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:23.628 16:17:24 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures continue, timestamps 16:17:24.610 through 16:17:24.613 ...]
00:21:23.628 [2024-04-24 16:17:24.613854] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:23.628 [2024-04-24 16:17:24.616340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.628 [2024-04-24 16:17:24.616506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.628 [2024-04-24 16:17:24.616534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.628 [2024-04-24 16:17:24.616550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.628 [2024-04-24 16:17:24.616563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.628 [2024-04-24 16:17:24.616597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.628 qpair failed and we were unable to recover it.
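The listener NOTICE above completes the target-side setup the rpc_cmd calls have been assembling. For reference, a standalone sketch of the same sequence against a running nvmf_tgt (default RPC socket assumed; the Malloc0 size/block size and serial number here are illustrative, not taken from the log):

  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                  # backing bdev (sizes assumed)
  scripts/rpc.py nvmf_create_transport -t TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # added just below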
00:21:23.628 16:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:23.628 16:17:24 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:23.628 16:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:23.628 16:17:24 -- common/autotest_common.sh@10 -- # set +x
00:21:23.628 16:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:23.628 16:17:24 -- host/target_disconnect.sh@58 -- # wait 3479220
[... the Unknown controller ID 0x1 / fabric CONNECT failure block repeats for reconnect attempts at 16:17:24.626, 16:17:24.636 and 16:17:24.646 ...]
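The repeating block first shown at 16:17:24.616 is a single failure signature, not many: the host is re-attaching an I/O qpair using the controller ID it held before the disconnect, the restarted target no longer knows that controller ("Unknown controller ID 0x1"), and the fabrics CONNECT completes with sct 1 (command-specific status); sc 130 is 0x82, which for a CONNECT reads as Connect Invalid Parameters in the NVMe-oF spec (my reading of the raw values; the log prints only the numbers). When triaging a log like this, it helps to collapse the spam to its distinct signatures, as sketched here (assumes the console output was saved to build.log):

  printf 'sc %d = 0x%02x\n' 130 130        # -> 0x82, Connect Invalid Parameters (Fabrics CONNECT)
  # Count each distinct *ERROR* message once instead of reading every repeat:
  grep -oE '\*ERROR\*: [^[]+' build.log | sort | uniq -c | sort -rn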
00:21:23.629 [2024-04-24 16:17:24.656189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.656321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.629 [2024-04-24 16:17:24.656348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.629 [2024-04-24 16:17:24.656363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.629 [2024-04-24 16:17:24.656376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.629 [2024-04-24 16:17:24.656405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.629 qpair failed and we were unable to recover it. 00:21:23.629 [2024-04-24 16:17:24.666178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.666309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.629 [2024-04-24 16:17:24.666336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.629 [2024-04-24 16:17:24.666351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.629 [2024-04-24 16:17:24.666364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.629 [2024-04-24 16:17:24.666393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.629 qpair failed and we were unable to recover it. 00:21:23.629 [2024-04-24 16:17:24.676195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.676322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.629 [2024-04-24 16:17:24.676348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.629 [2024-04-24 16:17:24.676364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.629 [2024-04-24 16:17:24.676376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.629 [2024-04-24 16:17:24.676423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.629 qpair failed and we were unable to recover it. 
00:21:23.629 [2024-04-24 16:17:24.686238] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.686402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.629 [2024-04-24 16:17:24.686428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.629 [2024-04-24 16:17:24.686451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.629 [2024-04-24 16:17:24.686472] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.629 [2024-04-24 16:17:24.686503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.629 qpair failed and we were unable to recover it. 00:21:23.629 [2024-04-24 16:17:24.696227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.696365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.629 [2024-04-24 16:17:24.696391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.629 [2024-04-24 16:17:24.696407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.629 [2024-04-24 16:17:24.696419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.629 [2024-04-24 16:17:24.696449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.629 qpair failed and we were unable to recover it. 00:21:23.629 [2024-04-24 16:17:24.706282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.706415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.629 [2024-04-24 16:17:24.706442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.629 [2024-04-24 16:17:24.706458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.629 [2024-04-24 16:17:24.706470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.629 [2024-04-24 16:17:24.706512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.629 qpair failed and we were unable to recover it. 
00:21:23.629 [2024-04-24 16:17:24.716296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.716431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.629 [2024-04-24 16:17:24.716459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.629 [2024-04-24 16:17:24.716474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.629 [2024-04-24 16:17:24.716487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.629 [2024-04-24 16:17:24.716517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.629 qpair failed and we were unable to recover it. 00:21:23.629 [2024-04-24 16:17:24.726360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.726503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.629 [2024-04-24 16:17:24.726536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.629 [2024-04-24 16:17:24.726567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.629 [2024-04-24 16:17:24.726584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.629 [2024-04-24 16:17:24.726615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.629 qpair failed and we were unable to recover it. 00:21:23.629 [2024-04-24 16:17:24.736423] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.736553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.629 [2024-04-24 16:17:24.736580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.629 [2024-04-24 16:17:24.736595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.629 [2024-04-24 16:17:24.736607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.629 [2024-04-24 16:17:24.736637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.629 qpair failed and we were unable to recover it. 
00:21:23.629 [2024-04-24 16:17:24.746395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.629 [2024-04-24 16:17:24.746510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.746537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.630 [2024-04-24 16:17:24.746552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.630 [2024-04-24 16:17:24.746565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.630 [2024-04-24 16:17:24.746595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.630 qpair failed and we were unable to recover it. 00:21:23.630 [2024-04-24 16:17:24.756454] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.630 [2024-04-24 16:17:24.756584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.756611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.630 [2024-04-24 16:17:24.756626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.630 [2024-04-24 16:17:24.756639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.630 [2024-04-24 16:17:24.756669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.630 qpair failed and we were unable to recover it. 00:21:23.630 [2024-04-24 16:17:24.766455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.630 [2024-04-24 16:17:24.766568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.766603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.630 [2024-04-24 16:17:24.766623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.630 [2024-04-24 16:17:24.766641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.630 [2024-04-24 16:17:24.766673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.630 qpair failed and we were unable to recover it. 
00:21:23.630 [2024-04-24 16:17:24.776522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.630 [2024-04-24 16:17:24.776685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.776711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.630 [2024-04-24 16:17:24.776726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.630 [2024-04-24 16:17:24.776738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.630 [2024-04-24 16:17:24.776778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.630 qpair failed and we were unable to recover it. 00:21:23.630 [2024-04-24 16:17:24.786548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.630 [2024-04-24 16:17:24.786691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.786718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.630 [2024-04-24 16:17:24.786733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.630 [2024-04-24 16:17:24.786758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.630 [2024-04-24 16:17:24.786790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.630 qpair failed and we were unable to recover it. 00:21:23.630 [2024-04-24 16:17:24.796576] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.630 [2024-04-24 16:17:24.796705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.796731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.630 [2024-04-24 16:17:24.796754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.630 [2024-04-24 16:17:24.796768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.630 [2024-04-24 16:17:24.796798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.630 qpair failed and we were unable to recover it. 
00:21:23.630 [2024-04-24 16:17:24.806563] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.630 [2024-04-24 16:17:24.806699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.806724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.630 [2024-04-24 16:17:24.806740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.630 [2024-04-24 16:17:24.806762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.630 [2024-04-24 16:17:24.806792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.630 qpair failed and we were unable to recover it. 00:21:23.630 [2024-04-24 16:17:24.816657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.630 [2024-04-24 16:17:24.816823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.816850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.630 [2024-04-24 16:17:24.816865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.630 [2024-04-24 16:17:24.816877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.630 [2024-04-24 16:17:24.816907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.630 qpair failed and we were unable to recover it. 00:21:23.630 [2024-04-24 16:17:24.826632] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.630 [2024-04-24 16:17:24.826770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.826796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.630 [2024-04-24 16:17:24.826811] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.630 [2024-04-24 16:17:24.826823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.630 [2024-04-24 16:17:24.826852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.630 qpair failed and we were unable to recover it. 
00:21:23.630 [2024-04-24 16:17:24.836674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.630 [2024-04-24 16:17:24.836820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.630 [2024-04-24 16:17:24.836846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.631 [2024-04-24 16:17:24.836862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.631 [2024-04-24 16:17:24.836874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.631 [2024-04-24 16:17:24.836903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.631 qpair failed and we were unable to recover it. 00:21:23.631 [2024-04-24 16:17:24.846688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.631 [2024-04-24 16:17:24.846836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.631 [2024-04-24 16:17:24.846862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.631 [2024-04-24 16:17:24.846877] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.631 [2024-04-24 16:17:24.846889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.631 [2024-04-24 16:17:24.846918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.631 qpair failed and we were unable to recover it. 00:21:23.631 [2024-04-24 16:17:24.856738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.631 [2024-04-24 16:17:24.856885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.631 [2024-04-24 16:17:24.856911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.631 [2024-04-24 16:17:24.856926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.631 [2024-04-24 16:17:24.856944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.631 [2024-04-24 16:17:24.856975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.631 qpair failed and we were unable to recover it. 
00:21:23.631 [2024-04-24 16:17:24.866779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.631 [2024-04-24 16:17:24.866911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.631 [2024-04-24 16:17:24.866938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.631 [2024-04-24 16:17:24.866953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.631 [2024-04-24 16:17:24.866965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.631 [2024-04-24 16:17:24.866995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.631 qpair failed and we were unable to recover it. 00:21:23.631 [2024-04-24 16:17:24.876805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.631 [2024-04-24 16:17:24.876929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.631 [2024-04-24 16:17:24.876955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.631 [2024-04-24 16:17:24.876970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.631 [2024-04-24 16:17:24.876983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.631 [2024-04-24 16:17:24.877013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.631 qpair failed and we were unable to recover it. 00:21:23.631 [2024-04-24 16:17:24.886814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.631 [2024-04-24 16:17:24.886982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.631 [2024-04-24 16:17:24.887008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.631 [2024-04-24 16:17:24.887024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.631 [2024-04-24 16:17:24.887036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.631 [2024-04-24 16:17:24.887066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.631 qpair failed and we were unable to recover it. 
00:21:23.631 [2024-04-24 16:17:24.896837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.631 [2024-04-24 16:17:24.896963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.631 [2024-04-24 16:17:24.896989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.631 [2024-04-24 16:17:24.897004] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.631 [2024-04-24 16:17:24.897017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.631 [2024-04-24 16:17:24.897046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.631 qpair failed and we were unable to recover it. 00:21:23.915 [2024-04-24 16:17:24.906907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.915 [2024-04-24 16:17:24.907046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.915 [2024-04-24 16:17:24.907073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.915 [2024-04-24 16:17:24.907089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.915 [2024-04-24 16:17:24.907104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.915 [2024-04-24 16:17:24.907134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.915 qpair failed and we were unable to recover it. 00:21:23.915 [2024-04-24 16:17:24.916912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.915 [2024-04-24 16:17:24.917054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.915 [2024-04-24 16:17:24.917081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.915 [2024-04-24 16:17:24.917096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.915 [2024-04-24 16:17:24.917108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.915 [2024-04-24 16:17:24.917138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.915 qpair failed and we were unable to recover it. 
00:21:23.915 [2024-04-24 16:17:24.926925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.915 [2024-04-24 16:17:24.927064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.915 [2024-04-24 16:17:24.927093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.915 [2024-04-24 16:17:24.927108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.915 [2024-04-24 16:17:24.927121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.915 [2024-04-24 16:17:24.927151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.915 qpair failed and we were unable to recover it. 00:21:23.915 [2024-04-24 16:17:24.936982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.915 [2024-04-24 16:17:24.937125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.915 [2024-04-24 16:17:24.937152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.915 [2024-04-24 16:17:24.937167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.915 [2024-04-24 16:17:24.937180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.915 [2024-04-24 16:17:24.937209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.915 qpair failed and we were unable to recover it. 00:21:23.915 [2024-04-24 16:17:24.947025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.915 [2024-04-24 16:17:24.947190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.915 [2024-04-24 16:17:24.947218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.915 [2024-04-24 16:17:24.947243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.915 [2024-04-24 16:17:24.947258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.915 [2024-04-24 16:17:24.947302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.915 qpair failed and we were unable to recover it. 
00:21:23.916 [2024-04-24 16:17:24.957017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:24.957149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.916 [2024-04-24 16:17:24.957175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.916 [2024-04-24 16:17:24.957190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.916 [2024-04-24 16:17:24.957203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.916 [2024-04-24 16:17:24.957233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.916 qpair failed and we were unable to recover it. 00:21:23.916 [2024-04-24 16:17:24.967068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:24.967205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.916 [2024-04-24 16:17:24.967231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.916 [2024-04-24 16:17:24.967246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.916 [2024-04-24 16:17:24.967259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.916 [2024-04-24 16:17:24.967288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.916 qpair failed and we were unable to recover it. 00:21:23.916 [2024-04-24 16:17:24.977043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:24.977169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.916 [2024-04-24 16:17:24.977195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.916 [2024-04-24 16:17:24.977210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.916 [2024-04-24 16:17:24.977223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.916 [2024-04-24 16:17:24.977252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.916 qpair failed and we were unable to recover it. 
00:21:23.916 [2024-04-24 16:17:24.987091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:24.987219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.916 [2024-04-24 16:17:24.987245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.916 [2024-04-24 16:17:24.987260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.916 [2024-04-24 16:17:24.987273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.916 [2024-04-24 16:17:24.987314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.916 qpair failed and we were unable to recover it. 00:21:23.916 [2024-04-24 16:17:24.997093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:24.997222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.916 [2024-04-24 16:17:24.997247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.916 [2024-04-24 16:17:24.997262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.916 [2024-04-24 16:17:24.997275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.916 [2024-04-24 16:17:24.997303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.916 qpair failed and we were unable to recover it. 00:21:23.916 [2024-04-24 16:17:25.007129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:25.007296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.916 [2024-04-24 16:17:25.007321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.916 [2024-04-24 16:17:25.007336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.916 [2024-04-24 16:17:25.007348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.916 [2024-04-24 16:17:25.007377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.916 qpair failed and we were unable to recover it. 
00:21:23.916 [2024-04-24 16:17:25.017150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:25.017277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.916 [2024-04-24 16:17:25.017303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.916 [2024-04-24 16:17:25.017318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.916 [2024-04-24 16:17:25.017330] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.916 [2024-04-24 16:17:25.017360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.916 qpair failed and we were unable to recover it. 00:21:23.916 [2024-04-24 16:17:25.027245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:25.027384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.916 [2024-04-24 16:17:25.027410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.916 [2024-04-24 16:17:25.027425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.916 [2024-04-24 16:17:25.027437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.916 [2024-04-24 16:17:25.027466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.916 qpair failed and we were unable to recover it. 00:21:23.916 [2024-04-24 16:17:25.037237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:25.037372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.916 [2024-04-24 16:17:25.037403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.916 [2024-04-24 16:17:25.037419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.916 [2024-04-24 16:17:25.037432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.916 [2024-04-24 16:17:25.037461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.916 qpair failed and we were unable to recover it. 
00:21:23.916 [2024-04-24 16:17:25.047255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.916 [2024-04-24 16:17:25.047396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.917 [2024-04-24 16:17:25.047421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.917 [2024-04-24 16:17:25.047436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.917 [2024-04-24 16:17:25.047448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.917 [2024-04-24 16:17:25.047477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.917 qpair failed and we were unable to recover it. 00:21:23.917 [2024-04-24 16:17:25.057278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.917 [2024-04-24 16:17:25.057406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.917 [2024-04-24 16:17:25.057432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.917 [2024-04-24 16:17:25.057447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.917 [2024-04-24 16:17:25.057459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.917 [2024-04-24 16:17:25.057488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.917 qpair failed and we were unable to recover it. 00:21:23.917 [2024-04-24 16:17:25.067330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:23.917 [2024-04-24 16:17:25.067483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:23.917 [2024-04-24 16:17:25.067508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:23.917 [2024-04-24 16:17:25.067523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:23.917 [2024-04-24 16:17:25.067535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:23.917 [2024-04-24 16:17:25.067564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:23.917 qpair failed and we were unable to recover it. 
00:21:23.917 [2024-04-24 16:17:25.077352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.917 [2024-04-24 16:17:25.077476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.917 [2024-04-24 16:17:25.077503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.917 [2024-04-24 16:17:25.077518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.917 [2024-04-24 16:17:25.077530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.917 [2024-04-24 16:17:25.077577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.917 qpair failed and we were unable to recover it.
00:21:23.917 [2024-04-24 16:17:25.087374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.917 [2024-04-24 16:17:25.087513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.917 [2024-04-24 16:17:25.087538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.917 [2024-04-24 16:17:25.087553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.917 [2024-04-24 16:17:25.087566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.917 [2024-04-24 16:17:25.087595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.917 qpair failed and we were unable to recover it.
00:21:23.917 [2024-04-24 16:17:25.097371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.917 [2024-04-24 16:17:25.097503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.917 [2024-04-24 16:17:25.097528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.917 [2024-04-24 16:17:25.097543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.917 [2024-04-24 16:17:25.097555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.917 [2024-04-24 16:17:25.097583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.917 qpair failed and we were unable to recover it.
00:21:23.917 [2024-04-24 16:17:25.107431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.917 [2024-04-24 16:17:25.107566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.917 [2024-04-24 16:17:25.107592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.917 [2024-04-24 16:17:25.107607] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.917 [2024-04-24 16:17:25.107620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.917 [2024-04-24 16:17:25.107661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.917 qpair failed and we were unable to recover it.
00:21:23.917 [2024-04-24 16:17:25.117449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.917 [2024-04-24 16:17:25.117583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.917 [2024-04-24 16:17:25.117609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.917 [2024-04-24 16:17:25.117624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.917 [2024-04-24 16:17:25.117636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.917 [2024-04-24 16:17:25.117665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.917 qpair failed and we were unable to recover it.
00:21:23.917 [2024-04-24 16:17:25.127486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.917 [2024-04-24 16:17:25.127627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.917 [2024-04-24 16:17:25.127659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.917 [2024-04-24 16:17:25.127674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.917 [2024-04-24 16:17:25.127687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.917 [2024-04-24 16:17:25.127716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.917 qpair failed and we were unable to recover it.
00:21:23.917 [2024-04-24 16:17:25.137499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.917 [2024-04-24 16:17:25.137636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.917 [2024-04-24 16:17:25.137662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.917 [2024-04-24 16:17:25.137677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.917 [2024-04-24 16:17:25.137690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.918 [2024-04-24 16:17:25.137718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.918 qpair failed and we were unable to recover it.
00:21:23.918 [2024-04-24 16:17:25.147526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.918 [2024-04-24 16:17:25.147656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.918 [2024-04-24 16:17:25.147682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.918 [2024-04-24 16:17:25.147697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.918 [2024-04-24 16:17:25.147709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.918 [2024-04-24 16:17:25.147738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.918 qpair failed and we were unable to recover it.
00:21:23.918 [2024-04-24 16:17:25.157556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.918 [2024-04-24 16:17:25.157691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.918 [2024-04-24 16:17:25.157720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.918 [2024-04-24 16:17:25.157753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.918 [2024-04-24 16:17:25.157770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.918 [2024-04-24 16:17:25.157800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.918 qpair failed and we were unable to recover it.
00:21:23.918 [2024-04-24 16:17:25.167607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.918 [2024-04-24 16:17:25.167770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.918 [2024-04-24 16:17:25.167796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.918 [2024-04-24 16:17:25.167811] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.918 [2024-04-24 16:17:25.167823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.918 [2024-04-24 16:17:25.167862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.918 qpair failed and we were unable to recover it.
00:21:23.918 [2024-04-24 16:17:25.177619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.918 [2024-04-24 16:17:25.177764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.918 [2024-04-24 16:17:25.177791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.918 [2024-04-24 16:17:25.177805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.918 [2024-04-24 16:17:25.177818] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.918 [2024-04-24 16:17:25.177847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.918 qpair failed and we were unable to recover it.
00:21:23.918 [2024-04-24 16:17:25.187647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.918 [2024-04-24 16:17:25.187793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.918 [2024-04-24 16:17:25.187820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.918 [2024-04-24 16:17:25.187835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.918 [2024-04-24 16:17:25.187847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.918 [2024-04-24 16:17:25.187889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.918 qpair failed and we were unable to recover it.
00:21:23.918 [2024-04-24 16:17:25.197657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:23.918 [2024-04-24 16:17:25.197825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:23.918 [2024-04-24 16:17:25.197849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:23.918 [2024-04-24 16:17:25.197863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:23.918 [2024-04-24 16:17:25.197876] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:23.918 [2024-04-24 16:17:25.197905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:23.918 qpair failed and we were unable to recover it.
00:21:24.177 [2024-04-24 16:17:25.207750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.177 [2024-04-24 16:17:25.207896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.177 [2024-04-24 16:17:25.207921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.177 [2024-04-24 16:17:25.207936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.177 [2024-04-24 16:17:25.207949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.177 [2024-04-24 16:17:25.207978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.177 qpair failed and we were unable to recover it.
00:21:24.177 [2024-04-24 16:17:25.217770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.177 [2024-04-24 16:17:25.217952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.177 [2024-04-24 16:17:25.217978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.177 [2024-04-24 16:17:25.217993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.177 [2024-04-24 16:17:25.218005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.177 [2024-04-24 16:17:25.218034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.177 qpair failed and we were unable to recover it.
00:21:24.177 [2024-04-24 16:17:25.227771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.177 [2024-04-24 16:17:25.227906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.177 [2024-04-24 16:17:25.227932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.177 [2024-04-24 16:17:25.227947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.177 [2024-04-24 16:17:25.227959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.177 [2024-04-24 16:17:25.228001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.177 qpair failed and we were unable to recover it.
00:21:24.177 [2024-04-24 16:17:25.237816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.177 [2024-04-24 16:17:25.237991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.177 [2024-04-24 16:17:25.238017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.177 [2024-04-24 16:17:25.238032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.177 [2024-04-24 16:17:25.238044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.177 [2024-04-24 16:17:25.238073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.177 qpair failed and we were unable to recover it.
00:21:24.177 [2024-04-24 16:17:25.247848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.177 [2024-04-24 16:17:25.248017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.177 [2024-04-24 16:17:25.248043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.177 [2024-04-24 16:17:25.248057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.177 [2024-04-24 16:17:25.248069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.177 [2024-04-24 16:17:25.248098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.177 qpair failed and we were unable to recover it.
00:21:24.177 [2024-04-24 16:17:25.257852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.177 [2024-04-24 16:17:25.257986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.177 [2024-04-24 16:17:25.258013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.177 [2024-04-24 16:17:25.258027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.177 [2024-04-24 16:17:25.258045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.177 [2024-04-24 16:17:25.258088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.177 qpair failed and we were unable to recover it.
00:21:24.177 [2024-04-24 16:17:25.267882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.177 [2024-04-24 16:17:25.268025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.177 [2024-04-24 16:17:25.268051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.177 [2024-04-24 16:17:25.268065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.177 [2024-04-24 16:17:25.268078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.177 [2024-04-24 16:17:25.268107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.177 qpair failed and we were unable to recover it.
00:21:24.177 [2024-04-24 16:17:25.277926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.177 [2024-04-24 16:17:25.278107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.177 [2024-04-24 16:17:25.278133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.278148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.278161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.278202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.287970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.288125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.288151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.288166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.288178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.288219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.297993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.298141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.298167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.298182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.298194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.298223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.308021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.308156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.308182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.308197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.308209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.308238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.318044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.318179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.318204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.318219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.318232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.318261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.328072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.328213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.328238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.328253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.328265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.328294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.338098] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.338234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.338260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.338274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.338287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.338315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.348108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.348309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.348337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.348360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.348376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.348406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.358140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.358290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.358316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.358332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.358344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.358374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.368196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.368345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.368371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.368385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.368397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.368426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.378182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.378365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.378391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.378406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.378418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.378448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.388257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.178 [2024-04-24 16:17:25.388387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.178 [2024-04-24 16:17:25.388414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.178 [2024-04-24 16:17:25.388429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.178 [2024-04-24 16:17:25.388442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.178 [2024-04-24 16:17:25.388484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.178 qpair failed and we were unable to recover it.
00:21:24.178 [2024-04-24 16:17:25.398404] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.179 [2024-04-24 16:17:25.398545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.179 [2024-04-24 16:17:25.398571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.179 [2024-04-24 16:17:25.398586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.179 [2024-04-24 16:17:25.398598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.179 [2024-04-24 16:17:25.398627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.179 qpair failed and we were unable to recover it.
00:21:24.179 [2024-04-24 16:17:25.408328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.179 [2024-04-24 16:17:25.408464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.179 [2024-04-24 16:17:25.408492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.179 [2024-04-24 16:17:25.408507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.179 [2024-04-24 16:17:25.408520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.179 [2024-04-24 16:17:25.408550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.179 qpair failed and we were unable to recover it.
00:21:24.179 [2024-04-24 16:17:25.418344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.179 [2024-04-24 16:17:25.418485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.179 [2024-04-24 16:17:25.418511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.179 [2024-04-24 16:17:25.418526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.179 [2024-04-24 16:17:25.418539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.179 [2024-04-24 16:17:25.418568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.179 qpair failed and we were unable to recover it.
00:21:24.179 [2024-04-24 16:17:25.428411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.179 [2024-04-24 16:17:25.428573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.179 [2024-04-24 16:17:25.428598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.179 [2024-04-24 16:17:25.428613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.179 [2024-04-24 16:17:25.428625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.179 [2024-04-24 16:17:25.428655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.179 qpair failed and we were unable to recover it.
00:21:24.179 [2024-04-24 16:17:25.438350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.179 [2024-04-24 16:17:25.438479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.179 [2024-04-24 16:17:25.438505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.179 [2024-04-24 16:17:25.438526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.179 [2024-04-24 16:17:25.438539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.179 [2024-04-24 16:17:25.438568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.179 qpair failed and we were unable to recover it.
00:21:24.179 [2024-04-24 16:17:25.448490] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.179 [2024-04-24 16:17:25.448630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.179 [2024-04-24 16:17:25.448655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.179 [2024-04-24 16:17:25.448670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.179 [2024-04-24 16:17:25.448683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.179 [2024-04-24 16:17:25.448712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.179 qpair failed and we were unable to recover it.
00:21:24.179 [2024-04-24 16:17:25.458425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.179 [2024-04-24 16:17:25.458557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.179 [2024-04-24 16:17:25.458584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.179 [2024-04-24 16:17:25.458599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.179 [2024-04-24 16:17:25.458612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.179 [2024-04-24 16:17:25.458653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.179 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.468446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.468598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.468623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.468638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.468651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.468681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.478471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.478607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.478634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.478648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.478661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.478690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.488549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.488684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.488710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.488725] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.488737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.488775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.498527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.498696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.498722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.498737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.498759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.498789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.508558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.508709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.508735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.508759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.508772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.508801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.518621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.518803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.518829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.518844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.518856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.518885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.528617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.528763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.528795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.528810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.528823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.528852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.538708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.538859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.538886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.538901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.538913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.538942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.548692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.548836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.548878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.548894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.548906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.548949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.558703] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.558846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.558872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.558887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.558900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.558929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.568757] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.568900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.568925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.568940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.568952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.568987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.578795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.578940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.578966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.578981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.578993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.579022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.588807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.588940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.588966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.588980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.588993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.589022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.598838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.598996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.599021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.599036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.599049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.599078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.608879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.609017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.609042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.609057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.609069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.609099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.618874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.619043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.619074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.619091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.619103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.619132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.628919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.629053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.629079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.629094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.629107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.629135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.638953] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:24.440 [2024-04-24 16:17:25.639095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:24.440 [2024-04-24 16:17:25.639122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:24.440 [2024-04-24 16:17:25.639137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:24.440 [2024-04-24 16:17:25.639150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:24.440 [2024-04-24 16:17:25.639179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.440 qpair failed and we were unable to recover it.
00:21:24.440 [2024-04-24 16:17:25.649028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.440 [2024-04-24 16:17:25.649198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.440 [2024-04-24 16:17:25.649224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.440 [2024-04-24 16:17:25.649239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.440 [2024-04-24 16:17:25.649251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.440 [2024-04-24 16:17:25.649292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.440 qpair failed and we were unable to recover it. 00:21:24.440 [2024-04-24 16:17:25.659019] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.440 [2024-04-24 16:17:25.659159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.440 [2024-04-24 16:17:25.659186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.441 [2024-04-24 16:17:25.659201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.441 [2024-04-24 16:17:25.659226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.441 [2024-04-24 16:17:25.659257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.441 qpair failed and we were unable to recover it. 00:21:24.441 [2024-04-24 16:17:25.669063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.441 [2024-04-24 16:17:25.669202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.441 [2024-04-24 16:17:25.669229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.441 [2024-04-24 16:17:25.669244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.441 [2024-04-24 16:17:25.669257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.441 [2024-04-24 16:17:25.669286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.441 qpair failed and we were unable to recover it. 
00:21:24.441 [2024-04-24 16:17:25.679066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.441 [2024-04-24 16:17:25.679204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.441 [2024-04-24 16:17:25.679230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.441 [2024-04-24 16:17:25.679245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.441 [2024-04-24 16:17:25.679257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.441 [2024-04-24 16:17:25.679287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.441 qpair failed and we were unable to recover it. 00:21:24.441 [2024-04-24 16:17:25.689165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.441 [2024-04-24 16:17:25.689326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.441 [2024-04-24 16:17:25.689355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.441 [2024-04-24 16:17:25.689372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.441 [2024-04-24 16:17:25.689385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.441 [2024-04-24 16:17:25.689416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.441 qpair failed and we were unable to recover it. 00:21:24.441 [2024-04-24 16:17:25.699122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.441 [2024-04-24 16:17:25.699265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.441 [2024-04-24 16:17:25.699291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.441 [2024-04-24 16:17:25.699306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.441 [2024-04-24 16:17:25.699322] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.441 [2024-04-24 16:17:25.699352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.441 qpair failed and we were unable to recover it. 
00:21:24.441 [2024-04-24 16:17:25.709151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.441 [2024-04-24 16:17:25.709344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.441 [2024-04-24 16:17:25.709369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.441 [2024-04-24 16:17:25.709385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.441 [2024-04-24 16:17:25.709397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.441 [2024-04-24 16:17:25.709427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.441 qpair failed and we were unable to recover it. 00:21:24.441 [2024-04-24 16:17:25.719205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.441 [2024-04-24 16:17:25.719341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.441 [2024-04-24 16:17:25.719366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.441 [2024-04-24 16:17:25.719381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.441 [2024-04-24 16:17:25.719394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.441 [2024-04-24 16:17:25.719424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.441 qpair failed and we were unable to recover it. 00:21:24.702 [2024-04-24 16:17:25.729258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.729398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.729424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.729439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.729452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.729481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 
00:21:24.702 [2024-04-24 16:17:25.739242] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.739410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.739436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.739451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.739464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.739493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 00:21:24.702 [2024-04-24 16:17:25.749297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.749426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.749452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.749472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.749486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.749515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 00:21:24.702 [2024-04-24 16:17:25.759329] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.759476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.759502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.759517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.759530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.759560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 
00:21:24.702 [2024-04-24 16:17:25.769382] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.769551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.769577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.769592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.769604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.769633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 00:21:24.702 [2024-04-24 16:17:25.779429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.779610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.779635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.779650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.779663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.779692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 00:21:24.702 [2024-04-24 16:17:25.789396] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.789568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.789594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.789609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.789622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.789651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 
00:21:24.702 [2024-04-24 16:17:25.799508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.799645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.799672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.799687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.799700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.799729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 00:21:24.702 [2024-04-24 16:17:25.809496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.809676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.809702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.809717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.809729] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.809766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 00:21:24.702 [2024-04-24 16:17:25.819463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.702 [2024-04-24 16:17:25.819607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.702 [2024-04-24 16:17:25.819633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.702 [2024-04-24 16:17:25.819648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.702 [2024-04-24 16:17:25.819661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.702 [2024-04-24 16:17:25.819690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.702 qpair failed and we were unable to recover it. 
00:21:24.703 [2024-04-24 16:17:25.829505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.829640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.829666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.829680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.829693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.829722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 00:21:24.703 [2024-04-24 16:17:25.839550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.839685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.839711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.839735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.839758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.839788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 00:21:24.703 [2024-04-24 16:17:25.849566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.849714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.849740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.849764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.849777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.849819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 
00:21:24.703 [2024-04-24 16:17:25.859669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.859812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.859838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.859853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.859865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.859894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 00:21:24.703 [2024-04-24 16:17:25.869622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.869764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.869791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.869806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.869819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.869860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 00:21:24.703 [2024-04-24 16:17:25.879653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.879814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.879840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.879855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.879868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.879897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 
00:21:24.703 [2024-04-24 16:17:25.889716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.889864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.889890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.889904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.889917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.889947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 00:21:24.703 [2024-04-24 16:17:25.899722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.899859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.899886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.899901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.899913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.899943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 00:21:24.703 [2024-04-24 16:17:25.909722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.909865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.909892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.909907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.909920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.909950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 
00:21:24.703 [2024-04-24 16:17:25.919787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.919920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.919947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.919961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.919974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.920003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 00:21:24.703 [2024-04-24 16:17:25.929822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.929966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.929996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.930012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.930025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.930054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 00:21:24.703 [2024-04-24 16:17:25.939824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.939964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.939990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.940005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.940017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.940046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 
00:21:24.703 [2024-04-24 16:17:25.949894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.950038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.950064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.703 [2024-04-24 16:17:25.950079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.703 [2024-04-24 16:17:25.950091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.703 [2024-04-24 16:17:25.950120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.703 qpair failed and we were unable to recover it. 00:21:24.703 [2024-04-24 16:17:25.959888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.703 [2024-04-24 16:17:25.960045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.703 [2024-04-24 16:17:25.960070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.704 [2024-04-24 16:17:25.960085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.704 [2024-04-24 16:17:25.960097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.704 [2024-04-24 16:17:25.960126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.704 qpair failed and we were unable to recover it. 00:21:24.704 [2024-04-24 16:17:25.969940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.704 [2024-04-24 16:17:25.970078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.704 [2024-04-24 16:17:25.970105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.704 [2024-04-24 16:17:25.970120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.704 [2024-04-24 16:17:25.970132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.704 [2024-04-24 16:17:25.970180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.704 qpair failed and we were unable to recover it. 
00:21:24.704 [2024-04-24 16:17:25.979934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.704 [2024-04-24 16:17:25.980103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.704 [2024-04-24 16:17:25.980129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.704 [2024-04-24 16:17:25.980143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.704 [2024-04-24 16:17:25.980156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.704 [2024-04-24 16:17:25.980197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.704 qpair failed and we were unable to recover it. 00:21:24.964 [2024-04-24 16:17:25.990023] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:25.990161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:25.990186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:25.990202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:25.990215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:25.990244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 00:21:24.964 [2024-04-24 16:17:26.000005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.000144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.000170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.000184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.000197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:26.000226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 
00:21:24.964 [2024-04-24 16:17:26.010033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.010174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.010200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.010215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.010228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:26.010257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 00:21:24.964 [2024-04-24 16:17:26.020061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.020237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.020269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.020285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.020297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:26.020327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 00:21:24.964 [2024-04-24 16:17:26.030093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.030232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.030258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.030273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.030286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:26.030315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 
00:21:24.964 [2024-04-24 16:17:26.040090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.040221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.040247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.040262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.040275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:26.040304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 00:21:24.964 [2024-04-24 16:17:26.050171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.050305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.050330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.050345] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.050357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:26.050386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 00:21:24.964 [2024-04-24 16:17:26.060171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.060308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.060334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.060349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.060368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:26.060398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 
00:21:24.964 [2024-04-24 16:17:26.070227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.070347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.070373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.070388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.070400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:26.070429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 00:21:24.964 [2024-04-24 16:17:26.080247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.080384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.080409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.080425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.080437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.964 [2024-04-24 16:17:26.080466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.964 qpair failed and we were unable to recover it. 00:21:24.964 [2024-04-24 16:17:26.090336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.964 [2024-04-24 16:17:26.090474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.964 [2024-04-24 16:17:26.090499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.964 [2024-04-24 16:17:26.090514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.964 [2024-04-24 16:17:26.090527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.090556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 
00:21:24.965 [2024-04-24 16:17:26.100261] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.100404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.100429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.100444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.100456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.100486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.110356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.110511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.110538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.110553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.110566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.110607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.120342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.120474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.120500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.120515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.120528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.120557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 
00:21:24.965 [2024-04-24 16:17:26.130395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.130576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.130601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.130616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.130629] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.130658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.140386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.140523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.140548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.140563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.140576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.140604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.150404] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.150544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.150569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.150585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.150603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.150632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 
00:21:24.965 [2024-04-24 16:17:26.160453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.160608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.160634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.160649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.160662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.160691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.170474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.170642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.170668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.170683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.170696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.170725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.180533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.180669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.180695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.180709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.180722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.180760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 
00:21:24.965 [2024-04-24 16:17:26.190529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.190666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.190692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.190707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.190719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.190760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.200589] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.200759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.200784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.200799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.200811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.200840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.210590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.210780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.210806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.210821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.210833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.210862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 
00:21:24.965 [2024-04-24 16:17:26.220607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.220759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.220785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.220800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.220813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.220842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.230648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.230793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.230820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.230835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.230847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.230876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 00:21:24.965 [2024-04-24 16:17:26.240711] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:24.965 [2024-04-24 16:17:26.240872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:24.965 [2024-04-24 16:17:26.240898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:24.965 [2024-04-24 16:17:26.240919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:24.965 [2024-04-24 16:17:26.240932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:24.965 [2024-04-24 16:17:26.240962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.965 qpair failed and we were unable to recover it. 
00:21:25.225 [2024-04-24 16:17:26.250694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.250884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.250910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.250924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.250937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.250966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 00:21:25.225 [2024-04-24 16:17:26.260770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.260951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.260977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.260991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.261004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.261033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 00:21:25.225 [2024-04-24 16:17:26.270753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.270889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.270914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.270928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.270941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.270970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 
00:21:25.225 [2024-04-24 16:17:26.280786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.280922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.280947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.280961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.280973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.281003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 00:21:25.225 [2024-04-24 16:17:26.290822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.291005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.291030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.291045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.291058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.291087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 00:21:25.225 [2024-04-24 16:17:26.300887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.301033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.301058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.301073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.301085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.301114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 
00:21:25.225 [2024-04-24 16:17:26.310877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.311015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.311041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.311057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.311069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.311098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 00:21:25.225 [2024-04-24 16:17:26.320883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.321050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.321076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.321091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.321103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.321132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 00:21:25.225 [2024-04-24 16:17:26.330959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.331093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.331123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.331139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.331152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.331193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 
00:21:25.225 [2024-04-24 16:17:26.340960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.341101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.341128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.341146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.341160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.341201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 00:21:25.225 [2024-04-24 16:17:26.351007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.351153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.351182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.351197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.351209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.351239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 00:21:25.225 [2024-04-24 16:17:26.361006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.361137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.225 [2024-04-24 16:17:26.361164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.225 [2024-04-24 16:17:26.361178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.225 [2024-04-24 16:17:26.361191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.225 [2024-04-24 16:17:26.361220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.225 qpair failed and we were unable to recover it. 
00:21:25.225 [2024-04-24 16:17:26.371062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.225 [2024-04-24 16:17:26.371214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.371240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.371255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.371270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.371305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.226 [2024-04-24 16:17:26.381063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.381199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.381225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.381240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.381252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.381282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.226 [2024-04-24 16:17:26.391106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.391261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.391288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.391303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.391319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.391350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 
00:21:25.226 [2024-04-24 16:17:26.401098] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.401231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.401257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.401272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.401284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.401314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.226 [2024-04-24 16:17:26.411208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.411345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.411372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.411387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.411400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.411441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.226 [2024-04-24 16:17:26.421211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.421380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.421412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.421427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.421440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.421469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 
00:21:25.226 [2024-04-24 16:17:26.431196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.431339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.431365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.431380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.431393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.431422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.226 [2024-04-24 16:17:26.441221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.441358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.441384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.441398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.441411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.441441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.226 [2024-04-24 16:17:26.451313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.451481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.451507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.451522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.451536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.451566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 
00:21:25.226 [2024-04-24 16:17:26.461329] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.461473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.461500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.461519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.461537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.461569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.226 [2024-04-24 16:17:26.471357] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.471500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.471526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.471541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.471554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.471583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.226 [2024-04-24 16:17:26.481338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.481470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.481496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.481511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.481524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.481553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 
00:21:25.226 [2024-04-24 16:17:26.491420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.491587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.491615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.491631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.491644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.491673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.226 [2024-04-24 16:17:26.501407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.226 [2024-04-24 16:17:26.501561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.226 [2024-04-24 16:17:26.501587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.226 [2024-04-24 16:17:26.501602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.226 [2024-04-24 16:17:26.501614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.226 [2024-04-24 16:17:26.501643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.226 qpair failed and we were unable to recover it. 00:21:25.486 [2024-04-24 16:17:26.511457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.511595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.511621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.511636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.511648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.511678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 
00:21:25.486 [2024-04-24 16:17:26.521451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.521583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.521608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.521623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.521637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.521666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 00:21:25.486 [2024-04-24 16:17:26.531516] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.531657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.531683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.531698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.531710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.531738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 00:21:25.486 [2024-04-24 16:17:26.541502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.541638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.541663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.541678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.541691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.541720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 
00:21:25.486 [2024-04-24 16:17:26.551545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.551686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.551711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.551726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.551754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.551787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 00:21:25.486 [2024-04-24 16:17:26.561608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.561755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.561781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.561796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.561808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.561838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 00:21:25.486 [2024-04-24 16:17:26.571606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.571767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.571793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.571808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.571821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.571851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 
00:21:25.486 [2024-04-24 16:17:26.581633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.581782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.581808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.581823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.581836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.581865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 00:21:25.486 [2024-04-24 16:17:26.591651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.591815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.591841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.591857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.591869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.591898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 00:21:25.486 [2024-04-24 16:17:26.601696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.601861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.601886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.601902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.601914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.601943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 
00:21:25.486 [2024-04-24 16:17:26.611777] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.611940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.611966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.611981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.611993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.612022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 00:21:25.486 [2024-04-24 16:17:26.621834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.486 [2024-04-24 16:17:26.621976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.486 [2024-04-24 16:17:26.622001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.486 [2024-04-24 16:17:26.622015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.486 [2024-04-24 16:17:26.622028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.486 [2024-04-24 16:17:26.622057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.486 qpair failed and we were unable to recover it. 00:21:25.486 [2024-04-24 16:17:26.631791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.631928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.631954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.631969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.631981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.632011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 
00:21:25.487 [2024-04-24 16:17:26.641792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.641927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.641953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.641974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.641987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.642017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 00:21:25.487 [2024-04-24 16:17:26.651842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.651985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.652010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.652025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.652037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.652066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 00:21:25.487 [2024-04-24 16:17:26.661949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.662096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.662123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.662142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.662155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.662184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 
00:21:25.487 [2024-04-24 16:17:26.671899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.672033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.672059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.672074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.672087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.672116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 00:21:25.487 [2024-04-24 16:17:26.681930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.682065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.682091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.682106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.682118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.682148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 00:21:25.487 [2024-04-24 16:17:26.692027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.692184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.692210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.692225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.692237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.692278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 
00:21:25.487 [2024-04-24 16:17:26.702008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.702176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.702202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.702216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.702229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.702259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 00:21:25.487 [2024-04-24 16:17:26.712017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.712197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.712223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.712238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.712250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.712279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 00:21:25.487 [2024-04-24 16:17:26.722085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.722233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.722259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.722273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.722286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.722315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 
00:21:25.487 [2024-04-24 16:17:26.732195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.732348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.732380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.732396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.732408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.732437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 00:21:25.487 [2024-04-24 16:17:26.742086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.742221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.742247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.742262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.742275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.742304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 00:21:25.487 [2024-04-24 16:17:26.752106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.752236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.752261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.752276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.487 [2024-04-24 16:17:26.752289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.487 [2024-04-24 16:17:26.752318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.487 qpair failed and we were unable to recover it. 
00:21:25.487 [2024-04-24 16:17:26.762133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.487 [2024-04-24 16:17:26.762267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.487 [2024-04-24 16:17:26.762292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.487 [2024-04-24 16:17:26.762307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.488 [2024-04-24 16:17:26.762319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.488 [2024-04-24 16:17:26.762349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.488 qpair failed and we were unable to recover it. 00:21:25.746 [2024-04-24 16:17:26.772304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.746 [2024-04-24 16:17:26.772492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.746 [2024-04-24 16:17:26.772517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.746 [2024-04-24 16:17:26.772533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.746 [2024-04-24 16:17:26.772546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.746 [2024-04-24 16:17:26.772580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.746 qpair failed and we were unable to recover it. 00:21:25.746 [2024-04-24 16:17:26.782225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.746 [2024-04-24 16:17:26.782358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.746 [2024-04-24 16:17:26.782383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.746 [2024-04-24 16:17:26.782398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.746 [2024-04-24 16:17:26.782411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.746 [2024-04-24 16:17:26.782440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 
00:21:25.747 [2024-04-24 16:17:26.792289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.792424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.792450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.792464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.792477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.792506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 00:21:25.747 [2024-04-24 16:17:26.802259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.802388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.802413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.802428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.802441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.802482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 00:21:25.747 [2024-04-24 16:17:26.812337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.812508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.812535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.812554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.812567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.812596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 
00:21:25.747 [2024-04-24 16:17:26.822316] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.822463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.822495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.822511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.822523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.822552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 00:21:25.747 [2024-04-24 16:17:26.832354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.832494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.832521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.832536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.832548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.832578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 00:21:25.747 [2024-04-24 16:17:26.842387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.842529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.842555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.842570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.842582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.842612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 
00:21:25.747 [2024-04-24 16:17:26.852432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.852620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.852646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.852660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.852673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.852702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 00:21:25.747 [2024-04-24 16:17:26.862451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.862627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.862653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.862668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.862681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.862729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 00:21:25.747 [2024-04-24 16:17:26.872466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.872641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.872667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.872682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.872694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.872724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 
00:21:25.747 [2024-04-24 16:17:26.882484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.882613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.882639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.882654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.882666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.882695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 00:21:25.747 [2024-04-24 16:17:26.892560] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.892707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.892733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.892758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.892772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.892802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 00:21:25.747 [2024-04-24 16:17:26.902557] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.902694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.902720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.902735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.902756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.902787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 
00:21:25.747 [2024-04-24 16:17:26.912619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.747 [2024-04-24 16:17:26.912766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.747 [2024-04-24 16:17:26.912793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.747 [2024-04-24 16:17:26.912809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.747 [2024-04-24 16:17:26.912821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.747 [2024-04-24 16:17:26.912851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.747 qpair failed and we were unable to recover it. 00:21:25.747 [2024-04-24 16:17:26.922600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:26.922748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:26.922775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:26.922790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:26.922803] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:26.922832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 00:21:25.748 [2024-04-24 16:17:26.932725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:26.932897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:26.932923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:26.932937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:26.932950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:26.932979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 
00:21:25.748 [2024-04-24 16:17:26.942691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:26.942833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:26.942859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:26.942874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:26.942886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:26.942928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 00:21:25.748 [2024-04-24 16:17:26.952679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:26.952815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:26.952841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:26.952856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:26.952875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:26.952905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 00:21:25.748 [2024-04-24 16:17:26.962718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:26.962855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:26.962881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:26.962896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:26.962908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:26.962937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 
00:21:25.748 [2024-04-24 16:17:26.972777] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:26.972938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:26.972963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:26.972978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:26.972991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:26.973020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 00:21:25.748 [2024-04-24 16:17:26.982789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:26.982934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:26.982960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:26.982974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:26.982987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:26.983029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 00:21:25.748 [2024-04-24 16:17:26.992825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:26.992989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:26.993015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:26.993030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:26.993042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:26.993072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 
00:21:25.748 [2024-04-24 16:17:27.002837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:27.002974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:27.003001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:27.003016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:27.003029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:27.003071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 00:21:25.748 [2024-04-24 16:17:27.012871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:27.013012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:27.013038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:27.013053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:27.013066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:27.013095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 00:21:25.748 [2024-04-24 16:17:27.022886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:25.748 [2024-04-24 16:17:27.023023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:25.748 [2024-04-24 16:17:27.023049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:25.748 [2024-04-24 16:17:27.023065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:25.748 [2024-04-24 16:17:27.023077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:25.748 [2024-04-24 16:17:27.023119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:25.748 qpair failed and we were unable to recover it. 
00:21:26.008 [2024-04-24 16:17:27.032909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.033044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.033070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.033085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.033098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.033127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.042995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.043164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.043189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.043210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.043223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.043252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.052990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.053127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.053152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.053167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.053180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.053208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 
00:21:26.008 [2024-04-24 16:17:27.063019] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.063157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.063183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.063198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.063210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.063239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.073042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.073173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.073200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.073215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.073227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.073256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.083061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.083189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.083215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.083230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.083242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.083271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 
00:21:26.008 [2024-04-24 16:17:27.093103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.093241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.093266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.093281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.093293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.093322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.103099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.103237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.103263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.103278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.103291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.103320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.113186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.113333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.113359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.113374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.113387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.113415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 
00:21:26.008 [2024-04-24 16:17:27.123219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.123378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.123404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.123418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.123431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.123460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.133263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.133443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.133468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.133488] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.133502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.133531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.143259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.143394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.143419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.143434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.143447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.143488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 
00:21:26.008 [2024-04-24 16:17:27.153244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.153374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.153400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.153415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.153427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.153456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.163372] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.163525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.163552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.163567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.163580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.163609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 00:21:26.008 [2024-04-24 16:17:27.173350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.008 [2024-04-24 16:17:27.173492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.008 [2024-04-24 16:17:27.173519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.008 [2024-04-24 16:17:27.173533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.008 [2024-04-24 16:17:27.173546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.008 [2024-04-24 16:17:27.173575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.008 qpair failed and we were unable to recover it. 
00:21:26.009 [2024-04-24 16:17:27.183346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.183489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.183514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.183529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.183542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.183571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 00:21:26.009 [2024-04-24 16:17:27.193371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.193509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.193535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.193551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.193563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.193593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 00:21:26.009 [2024-04-24 16:17:27.203406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.203543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.203568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.203583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.203595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.203631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 
00:21:26.009 [2024-04-24 16:17:27.213488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.213669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.213695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.213709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.213731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.213771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 00:21:26.009 [2024-04-24 16:17:27.223465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.223608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.223639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.223655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.223668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.223698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 00:21:26.009 [2024-04-24 16:17:27.233504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.233640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.233667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.233682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.233694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.233750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 
00:21:26.009 [2024-04-24 16:17:27.243512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.243679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.243705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.243720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.243732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.243782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 00:21:26.009 [2024-04-24 16:17:27.253545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.253729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.253766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.253783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.253795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.253825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 00:21:26.009 [2024-04-24 16:17:27.263599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.263793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.263819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.263834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.263847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.263895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 
00:21:26.009 [2024-04-24 16:17:27.273619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.273802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.273829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.273844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.273856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.273886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 00:21:26.009 [2024-04-24 16:17:27.283585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.009 [2024-04-24 16:17:27.283718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.009 [2024-04-24 16:17:27.283751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.009 [2024-04-24 16:17:27.283768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.009 [2024-04-24 16:17:27.283781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.009 [2024-04-24 16:17:27.283810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.009 qpair failed and we were unable to recover it. 00:21:26.271 [2024-04-24 16:17:27.293701] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.271 [2024-04-24 16:17:27.293896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.271 [2024-04-24 16:17:27.293922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.271 [2024-04-24 16:17:27.293938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.271 [2024-04-24 16:17:27.293951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.271 [2024-04-24 16:17:27.293981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.271 qpair failed and we were unable to recover it. 
00:21:26.271 [2024-04-24 16:17:27.303680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.271 [2024-04-24 16:17:27.303830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.271 [2024-04-24 16:17:27.303856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.271 [2024-04-24 16:17:27.303870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.271 [2024-04-24 16:17:27.303883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.271 [2024-04-24 16:17:27.303913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.271 qpair failed and we were unable to recover it. 00:21:26.271 [2024-04-24 16:17:27.313712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.271 [2024-04-24 16:17:27.313849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.271 [2024-04-24 16:17:27.313884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.271 [2024-04-24 16:17:27.313900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.271 [2024-04-24 16:17:27.313913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.271 [2024-04-24 16:17:27.313955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.271 qpair failed and we were unable to recover it. 00:21:26.271 [2024-04-24 16:17:27.323726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.271 [2024-04-24 16:17:27.323909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.271 [2024-04-24 16:17:27.323935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.271 [2024-04-24 16:17:27.323950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.271 [2024-04-24 16:17:27.323962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.271 [2024-04-24 16:17:27.323991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.271 qpair failed and we were unable to recover it. 
00:21:26.271 [2024-04-24 16:17:27.333808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.271 [2024-04-24 16:17:27.333983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.271 [2024-04-24 16:17:27.334009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.271 [2024-04-24 16:17:27.334023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.271 [2024-04-24 16:17:27.334035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.271 [2024-04-24 16:17:27.334065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.271 qpair failed and we were unable to recover it. 00:21:26.271 [2024-04-24 16:17:27.343793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.271 [2024-04-24 16:17:27.343936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.271 [2024-04-24 16:17:27.343962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.271 [2024-04-24 16:17:27.343977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.271 [2024-04-24 16:17:27.343990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.271 [2024-04-24 16:17:27.344019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.271 qpair failed and we were unable to recover it. 00:21:26.271 [2024-04-24 16:17:27.353844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.271 [2024-04-24 16:17:27.353977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.271 [2024-04-24 16:17:27.354003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.271 [2024-04-24 16:17:27.354018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.271 [2024-04-24 16:17:27.354036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.271 [2024-04-24 16:17:27.354066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.271 qpair failed and we were unable to recover it. 
00:21:26.271 [2024-04-24 16:17:27.363836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.271 [2024-04-24 16:17:27.363956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.271 [2024-04-24 16:17:27.363982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.271 [2024-04-24 16:17:27.363997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.271 [2024-04-24 16:17:27.364009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.271 [2024-04-24 16:17:27.364038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.271 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.373909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.374052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.374077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.374092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.374105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.374134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.383896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.384057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.384082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.384097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.384110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.384139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.393950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.394086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.394112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.394127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.394139] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.394168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.403987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.404164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.404190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.404205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.404217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.404246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.414028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.414214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.414248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.414275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.414289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.414322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.424027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.424158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.424185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.424200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.424213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.424242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.434047] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.434178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.434204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.434219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.434232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.434262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.444046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.444173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.444200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.444220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.444234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.444263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.454150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.454317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.454342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.454356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.454369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.454398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.464148] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.464282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.464308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.464322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.464335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.464363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.474193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.474334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.474360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.474374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.474387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.474416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.484206] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.484340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.484366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.484380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.484393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.484422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.272 [2024-04-24 16:17:27.494221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.272 [2024-04-24 16:17:27.494404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.272 [2024-04-24 16:17:27.494430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.272 [2024-04-24 16:17:27.494445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.272 [2024-04-24 16:17:27.494457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.272 [2024-04-24 16:17:27.494486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.272 qpair failed and we were unable to recover it.
00:21:26.273 [2024-04-24 16:17:27.504239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.273 [2024-04-24 16:17:27.504374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.273 [2024-04-24 16:17:27.504399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.273 [2024-04-24 16:17:27.504414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.273 [2024-04-24 16:17:27.504426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.273 [2024-04-24 16:17:27.504456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.273 qpair failed and we were unable to recover it.
00:21:26.273 [2024-04-24 16:17:27.514247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.273 [2024-04-24 16:17:27.514385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.273 [2024-04-24 16:17:27.514411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.273 [2024-04-24 16:17:27.514427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.273 [2024-04-24 16:17:27.514439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.273 [2024-04-24 16:17:27.514468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.273 qpair failed and we were unable to recover it.
00:21:26.273 [2024-04-24 16:17:27.524342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.273 [2024-04-24 16:17:27.524468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.273 [2024-04-24 16:17:27.524494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.273 [2024-04-24 16:17:27.524510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.273 [2024-04-24 16:17:27.524522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.273 [2024-04-24 16:17:27.524563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.273 qpair failed and we were unable to recover it.
00:21:26.273 [2024-04-24 16:17:27.534347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.273 [2024-04-24 16:17:27.534490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.273 [2024-04-24 16:17:27.534516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.273 [2024-04-24 16:17:27.534537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.273 [2024-04-24 16:17:27.534550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.273 [2024-04-24 16:17:27.534580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.273 qpair failed and we were unable to recover it.
00:21:26.273 [2024-04-24 16:17:27.544380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.273 [2024-04-24 16:17:27.544516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.273 [2024-04-24 16:17:27.544541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.273 [2024-04-24 16:17:27.544557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.273 [2024-04-24 16:17:27.544569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.273 [2024-04-24 16:17:27.544599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.273 qpair failed and we were unable to recover it.
00:21:26.535 [2024-04-24 16:17:27.554533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.535 [2024-04-24 16:17:27.554695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.535 [2024-04-24 16:17:27.554722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.535 [2024-04-24 16:17:27.554737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.535 [2024-04-24 16:17:27.554761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.535 [2024-04-24 16:17:27.554791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.535 qpair failed and we were unable to recover it.
00:21:26.535 [2024-04-24 16:17:27.564402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.535 [2024-04-24 16:17:27.564530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.535 [2024-04-24 16:17:27.564556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.535 [2024-04-24 16:17:27.564571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.535 [2024-04-24 16:17:27.564584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.535 [2024-04-24 16:17:27.564613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.535 qpair failed and we were unable to recover it.
00:21:26.535 [2024-04-24 16:17:27.574484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.535 [2024-04-24 16:17:27.574618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.535 [2024-04-24 16:17:27.574644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.535 [2024-04-24 16:17:27.574658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.535 [2024-04-24 16:17:27.574670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.535 [2024-04-24 16:17:27.574700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.535 qpair failed and we were unable to recover it.
00:21:26.535 [2024-04-24 16:17:27.584503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.535 [2024-04-24 16:17:27.584647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.535 [2024-04-24 16:17:27.584672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.535 [2024-04-24 16:17:27.584687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.535 [2024-04-24 16:17:27.584699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.535 [2024-04-24 16:17:27.584729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.594516] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.594651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.594681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.594697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.594709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.594739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.604503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.604679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.604705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.604720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.604733] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.604772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.614564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.614738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.614773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.614788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.614801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.614830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.624584] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.624720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.624763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.624780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.624793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.624822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.634622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.634760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.634786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.634801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.634814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.634843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.644644] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.644823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.644851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.644870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.644883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.644914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.654662] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.654801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.654828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.654843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.654855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.654884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.664692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.664832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.664860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.664875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.664888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.664924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.674731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.674857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.674884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.674899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.674912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.674954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.684800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.684927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.684953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.684968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.684981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.685029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.694842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.694989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.695014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.695030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.695042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.695071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.704823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.704999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.705025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.705040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.705053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.705082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.714835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.536 [2024-04-24 16:17:27.714968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.536 [2024-04-24 16:17:27.714999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.536 [2024-04-24 16:17:27.715016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.536 [2024-04-24 16:17:27.715028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.536 [2024-04-24 16:17:27.715057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.536 qpair failed and we were unable to recover it.
00:21:26.536 [2024-04-24 16:17:27.724876] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.724997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.725023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.725038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.725050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.725091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.537 [2024-04-24 16:17:27.734929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.735058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.735084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.735098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.735111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.735139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.537 [2024-04-24 16:17:27.744952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.745106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.745131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.745145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.745158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.745186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.537 [2024-04-24 16:17:27.754958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.755092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.755118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.755133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.755151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.755181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.537 [2024-04-24 16:17:27.764981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.765111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.765136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.765151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.765163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.765192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.537 [2024-04-24 16:17:27.775029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.775182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.775206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.775221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.775234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.775263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.537 [2024-04-24 16:17:27.785068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.785237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.785262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.785277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.785289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.785318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.537 [2024-04-24 16:17:27.795060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.795196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.795222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.795237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.795249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.795278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.537 [2024-04-24 16:17:27.805103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.805235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.805261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.805275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.805287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.805317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.537 [2024-04-24 16:17:27.815125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.537 [2024-04-24 16:17:27.815264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.537 [2024-04-24 16:17:27.815290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.537 [2024-04-24 16:17:27.815304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.537 [2024-04-24 16:17:27.815317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.537 [2024-04-24 16:17:27.815345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.537 qpair failed and we were unable to recover it.
00:21:26.798 [2024-04-24 16:17:27.825185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.798 [2024-04-24 16:17:27.825363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.798 [2024-04-24 16:17:27.825389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.798 [2024-04-24 16:17:27.825404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.798 [2024-04-24 16:17:27.825416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.798 [2024-04-24 16:17:27.825445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.798 qpair failed and we were unable to recover it.
00:21:26.798 [2024-04-24 16:17:27.835208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.798 [2024-04-24 16:17:27.835384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.798 [2024-04-24 16:17:27.835410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.798 [2024-04-24 16:17:27.835424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.798 [2024-04-24 16:17:27.835437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.798 [2024-04-24 16:17:27.835466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.798 qpair failed and we were unable to recover it.
00:21:26.798 [2024-04-24 16:17:27.845213] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.798 [2024-04-24 16:17:27.845340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.798 [2024-04-24 16:17:27.845364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.798 [2024-04-24 16:17:27.845379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.798 [2024-04-24 16:17:27.845397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.798 [2024-04-24 16:17:27.845428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.798 qpair failed and we were unable to recover it.
00:21:26.798 [2024-04-24 16:17:27.855310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.798 [2024-04-24 16:17:27.855472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.798 [2024-04-24 16:17:27.855497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.798 [2024-04-24 16:17:27.855512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.798 [2024-04-24 16:17:27.855524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.798 [2024-04-24 16:17:27.855553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.798 qpair failed and we were unable to recover it.
00:21:26.798 [2024-04-24 16:17:27.865290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.798 [2024-04-24 16:17:27.865430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.799 [2024-04-24 16:17:27.865454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.799 [2024-04-24 16:17:27.865468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.799 [2024-04-24 16:17:27.865481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.799 [2024-04-24 16:17:27.865509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.799 qpair failed and we were unable to recover it.
00:21:26.799 [2024-04-24 16:17:27.875301] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.799 [2024-04-24 16:17:27.875439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.799 [2024-04-24 16:17:27.875464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.799 [2024-04-24 16:17:27.875479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.799 [2024-04-24 16:17:27.875491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.799 [2024-04-24 16:17:27.875520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.799 qpair failed and we were unable to recover it.
00:21:26.799 [2024-04-24 16:17:27.885314] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.799 [2024-04-24 16:17:27.885441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.799 [2024-04-24 16:17:27.885467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.799 [2024-04-24 16:17:27.885481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.799 [2024-04-24 16:17:27.885493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.799 [2024-04-24 16:17:27.885522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.799 qpair failed and we were unable to recover it.
00:21:26.799 [2024-04-24 16:17:27.895397] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.799 [2024-04-24 16:17:27.895577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.799 [2024-04-24 16:17:27.895606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.799 [2024-04-24 16:17:27.895622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.799 [2024-04-24 16:17:27.895635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.799 [2024-04-24 16:17:27.895677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.799 qpair failed and we were unable to recover it.
00:21:26.799 [2024-04-24 16:17:27.905391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.799 [2024-04-24 16:17:27.905523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.799 [2024-04-24 16:17:27.905550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.799 [2024-04-24 16:17:27.905564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.799 [2024-04-24 16:17:27.905576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.799 [2024-04-24 16:17:27.905607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.799 qpair failed and we were unable to recover it.
00:21:26.799 [2024-04-24 16:17:27.915416] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.799 [2024-04-24 16:17:27.915582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.799 [2024-04-24 16:17:27.915609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.799 [2024-04-24 16:17:27.915624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.799 [2024-04-24 16:17:27.915636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.799 [2024-04-24 16:17:27.915666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.799 qpair failed and we were unable to recover it.
00:21:26.799 [2024-04-24 16:17:27.925420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:26.799 [2024-04-24 16:17:27.925565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:26.799 [2024-04-24 16:17:27.925591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:26.799 [2024-04-24 16:17:27.925606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:26.799 [2024-04-24 16:17:27.925619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90
00:21:26.799 [2024-04-24 16:17:27.925649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:26.799 qpair failed and we were unable to recover it.
00:21:26.799 [2024-04-24 16:17:27.935476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.799 [2024-04-24 16:17:27.935615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.799 [2024-04-24 16:17:27.935642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.799 [2024-04-24 16:17:27.935662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.799 [2024-04-24 16:17:27.935675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.799 [2024-04-24 16:17:27.935705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.799 qpair failed and we were unable to recover it. 00:21:26.799 [2024-04-24 16:17:27.945516] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.799 [2024-04-24 16:17:27.945642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.799 [2024-04-24 16:17:27.945668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.799 [2024-04-24 16:17:27.945682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.799 [2024-04-24 16:17:27.945695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.799 [2024-04-24 16:17:27.945723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.799 qpair failed and we were unable to recover it. 00:21:26.799 [2024-04-24 16:17:27.955540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.799 [2024-04-24 16:17:27.955670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.799 [2024-04-24 16:17:27.955696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.799 [2024-04-24 16:17:27.955711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.799 [2024-04-24 16:17:27.955723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.799 [2024-04-24 16:17:27.955763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.799 qpair failed and we were unable to recover it. 
00:21:26.799 [2024-04-24 16:17:27.965609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.799 [2024-04-24 16:17:27.965738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.799 [2024-04-24 16:17:27.965771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.799 [2024-04-24 16:17:27.965787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.799 [2024-04-24 16:17:27.965799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.799 [2024-04-24 16:17:27.965828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.799 qpair failed and we were unable to recover it. 00:21:26.799 [2024-04-24 16:17:27.975589] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.799 [2024-04-24 16:17:27.975716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.799 [2024-04-24 16:17:27.975748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.799 [2024-04-24 16:17:27.975765] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.799 [2024-04-24 16:17:27.975778] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.799 [2024-04-24 16:17:27.975807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.799 qpair failed and we were unable to recover it. 00:21:26.799 [2024-04-24 16:17:27.985641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.799 [2024-04-24 16:17:27.985783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.799 [2024-04-24 16:17:27.985806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.799 [2024-04-24 16:17:27.985820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.799 [2024-04-24 16:17:27.985833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.799 [2024-04-24 16:17:27.985862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.799 qpair failed and we were unable to recover it. 
00:21:26.799 [2024-04-24 16:17:27.995633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.800 [2024-04-24 16:17:27.995774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.800 [2024-04-24 16:17:27.995800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.800 [2024-04-24 16:17:27.995815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.800 [2024-04-24 16:17:27.995827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.800 [2024-04-24 16:17:27.995857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.800 qpair failed and we were unable to recover it. 00:21:26.800 [2024-04-24 16:17:28.005669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.800 [2024-04-24 16:17:28.005820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.800 [2024-04-24 16:17:28.005847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.800 [2024-04-24 16:17:28.005866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.800 [2024-04-24 16:17:28.005879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.800 [2024-04-24 16:17:28.005910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.800 qpair failed and we were unable to recover it. 00:21:26.800 [2024-04-24 16:17:28.015697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.800 [2024-04-24 16:17:28.015841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.800 [2024-04-24 16:17:28.015866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.800 [2024-04-24 16:17:28.015881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.800 [2024-04-24 16:17:28.015894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.800 [2024-04-24 16:17:28.015923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.800 qpair failed and we were unable to recover it. 
00:21:26.800 [2024-04-24 16:17:28.025761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.800 [2024-04-24 16:17:28.025919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.800 [2024-04-24 16:17:28.025950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.800 [2024-04-24 16:17:28.025966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.800 [2024-04-24 16:17:28.025979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.800 [2024-04-24 16:17:28.026009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.800 qpair failed and we were unable to recover it. 00:21:26.800 [2024-04-24 16:17:28.035749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.800 [2024-04-24 16:17:28.035889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.800 [2024-04-24 16:17:28.035915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.800 [2024-04-24 16:17:28.035930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.800 [2024-04-24 16:17:28.035943] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.800 [2024-04-24 16:17:28.035973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.800 qpair failed and we were unable to recover it. 00:21:26.800 [2024-04-24 16:17:28.045766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.800 [2024-04-24 16:17:28.045899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.800 [2024-04-24 16:17:28.045924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.800 [2024-04-24 16:17:28.045940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.800 [2024-04-24 16:17:28.045953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.800 [2024-04-24 16:17:28.045983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.800 qpair failed and we were unable to recover it. 
00:21:26.800 [2024-04-24 16:17:28.055944] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.800 [2024-04-24 16:17:28.056099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.800 [2024-04-24 16:17:28.056124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.800 [2024-04-24 16:17:28.056139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.800 [2024-04-24 16:17:28.056151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.800 [2024-04-24 16:17:28.056181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.800 qpair failed and we were unable to recover it. 00:21:26.800 [2024-04-24 16:17:28.065845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.800 [2024-04-24 16:17:28.065988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.800 [2024-04-24 16:17:28.066013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.800 [2024-04-24 16:17:28.066028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.800 [2024-04-24 16:17:28.066041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.800 [2024-04-24 16:17:28.066076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.800 qpair failed and we were unable to recover it. 00:21:26.800 [2024-04-24 16:17:28.075867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:26.800 [2024-04-24 16:17:28.076008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:26.800 [2024-04-24 16:17:28.076032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:26.800 [2024-04-24 16:17:28.076047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:26.800 [2024-04-24 16:17:28.076060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:26.800 [2024-04-24 16:17:28.076104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.800 qpair failed and we were unable to recover it. 
00:21:27.062 [2024-04-24 16:17:28.085911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.086053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.086078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.086092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.086105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.086134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 00:21:27.062 [2024-04-24 16:17:28.095954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.096136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.096161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.096176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.096189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.096219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 00:21:27.062 [2024-04-24 16:17:28.105971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.106117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.106141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.106156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.106169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.106199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 
00:21:27.062 [2024-04-24 16:17:28.115977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.116124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.116156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.116172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.116185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.116215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 00:21:27.062 [2024-04-24 16:17:28.125991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.126121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.126146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.126160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.126174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.126203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 00:21:27.062 [2024-04-24 16:17:28.136058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.136202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.136227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.136242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.136256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.136286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 
00:21:27.062 [2024-04-24 16:17:28.146081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.146218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.146243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.146258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.146271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.146301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 00:21:27.062 [2024-04-24 16:17:28.156093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.156277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.156302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.156317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.156336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.156367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 00:21:27.062 [2024-04-24 16:17:28.166121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.166270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.166297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.166312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.166325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.166355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 
00:21:27.062 [2024-04-24 16:17:28.176153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.176295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.176321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.176336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.176349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.176378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 00:21:27.062 [2024-04-24 16:17:28.186181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.186316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.186342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.186358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.186371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.186400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 00:21:27.062 [2024-04-24 16:17:28.196271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.196441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.062 [2024-04-24 16:17:28.196467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.062 [2024-04-24 16:17:28.196482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.062 [2024-04-24 16:17:28.196510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.062 [2024-04-24 16:17:28.196541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.062 qpair failed and we were unable to recover it. 
00:21:27.062 [2024-04-24 16:17:28.206251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.062 [2024-04-24 16:17:28.206394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.206420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.206434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.206448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.206477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 00:21:27.063 [2024-04-24 16:17:28.216295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.216442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.216467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.216483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.216498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.216528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 00:21:27.063 [2024-04-24 16:17:28.226308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.226465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.226490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.226505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.226532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.226562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 
00:21:27.063 [2024-04-24 16:17:28.236353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.236530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.236569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.236583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.236596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.236626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 00:21:27.063 [2024-04-24 16:17:28.246346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.246490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.246514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.246530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.246552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.246598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 00:21:27.063 [2024-04-24 16:17:28.256416] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.256560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.256585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.256599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.256613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.256642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 
00:21:27.063 [2024-04-24 16:17:28.266427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.266569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.266597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.266613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.266626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.266671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 00:21:27.063 [2024-04-24 16:17:28.276464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.276644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.276670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.276685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.276699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.276750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 00:21:27.063 [2024-04-24 16:17:28.286456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.286584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.286609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.286624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.286637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.286667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 
00:21:27.063 [2024-04-24 16:17:28.296517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.296660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.296685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.296700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.296713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.296749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 00:21:27.063 [2024-04-24 16:17:28.306525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.306662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.306689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.306705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.306718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.306757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 00:21:27.063 [2024-04-24 16:17:28.316561] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.316692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.316719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.316736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.316759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.316790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 
00:21:27.063 [2024-04-24 16:17:28.326565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.326694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.326720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.326736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.063 [2024-04-24 16:17:28.326760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.063 [2024-04-24 16:17:28.326790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.063 qpair failed and we were unable to recover it. 00:21:27.063 [2024-04-24 16:17:28.336600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.063 [2024-04-24 16:17:28.336750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.063 [2024-04-24 16:17:28.336776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.063 [2024-04-24 16:17:28.336797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.064 [2024-04-24 16:17:28.336811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.064 [2024-04-24 16:17:28.336842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.064 qpair failed and we were unable to recover it. 00:21:27.358 [2024-04-24 16:17:28.346700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.358 [2024-04-24 16:17:28.346864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.358 [2024-04-24 16:17:28.346892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.358 [2024-04-24 16:17:28.346909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.358 [2024-04-24 16:17:28.346921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.358 [2024-04-24 16:17:28.346952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.358 qpair failed and we were unable to recover it. 
00:21:27.358 [2024-04-24 16:17:28.356655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.358 [2024-04-24 16:17:28.356806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.358 [2024-04-24 16:17:28.356832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.358 [2024-04-24 16:17:28.356848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.358 [2024-04-24 16:17:28.356861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.358 [2024-04-24 16:17:28.356891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.358 qpair failed and we were unable to recover it. 00:21:27.358 [2024-04-24 16:17:28.366691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.358 [2024-04-24 16:17:28.366842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.358 [2024-04-24 16:17:28.366868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.358 [2024-04-24 16:17:28.366884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.358 [2024-04-24 16:17:28.366897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.358 [2024-04-24 16:17:28.366926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.358 qpair failed and we were unable to recover it. 00:21:27.358 [2024-04-24 16:17:28.376767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.358 [2024-04-24 16:17:28.376916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.358 [2024-04-24 16:17:28.376942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.358 [2024-04-24 16:17:28.376957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.358 [2024-04-24 16:17:28.376970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.358 [2024-04-24 16:17:28.377000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.358 qpair failed and we were unable to recover it. 
00:21:27.358 [2024-04-24 16:17:28.386725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.358 [2024-04-24 16:17:28.386873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.358 [2024-04-24 16:17:28.386899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.359 [2024-04-24 16:17:28.386915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.359 [2024-04-24 16:17:28.386928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.359 [2024-04-24 16:17:28.386958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.359 qpair failed and we were unable to recover it. 00:21:27.359 [2024-04-24 16:17:28.396801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.359 [2024-04-24 16:17:28.396944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.359 [2024-04-24 16:17:28.396969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.359 [2024-04-24 16:17:28.396984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.359 [2024-04-24 16:17:28.396997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.359 [2024-04-24 16:17:28.397027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.359 qpair failed and we were unable to recover it. 00:21:27.359 [2024-04-24 16:17:28.406864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.359 [2024-04-24 16:17:28.407101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.359 [2024-04-24 16:17:28.407127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.359 [2024-04-24 16:17:28.407143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.359 [2024-04-24 16:17:28.407155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.359 [2024-04-24 16:17:28.407185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.359 qpair failed and we were unable to recover it. 
00:21:27.359 [2024-04-24 16:17:28.416897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.359 [2024-04-24 16:17:28.417077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.359 [2024-04-24 16:17:28.417104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.359 [2024-04-24 16:17:28.417120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.359 [2024-04-24 16:17:28.417133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.359 [2024-04-24 16:17:28.417162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.359 qpair failed and we were unable to recover it. 00:21:27.359 [2024-04-24 16:17:28.426899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.359 [2024-04-24 16:17:28.427062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.359 [2024-04-24 16:17:28.427095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.359 [2024-04-24 16:17:28.427112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.359 [2024-04-24 16:17:28.427124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.359 [2024-04-24 16:17:28.427155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.359 qpair failed and we were unable to recover it. 00:21:27.359 [2024-04-24 16:17:28.436943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.359 [2024-04-24 16:17:28.437083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.359 [2024-04-24 16:17:28.437111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.359 [2024-04-24 16:17:28.437127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.359 [2024-04-24 16:17:28.437140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.359 [2024-04-24 16:17:28.437182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.359 qpair failed and we were unable to recover it. 
00:21:27.359 [2024-04-24 16:17:28.446983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.359 [2024-04-24 16:17:28.447148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.359 [2024-04-24 16:17:28.447176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.359 [2024-04-24 16:17:28.447192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.359 [2024-04-24 16:17:28.447223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.359 [2024-04-24 16:17:28.447254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.359 qpair failed and we were unable to recover it. 00:21:27.359 [2024-04-24 16:17:28.456970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.359 [2024-04-24 16:17:28.457121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.359 [2024-04-24 16:17:28.457146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.359 [2024-04-24 16:17:28.457161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.359 [2024-04-24 16:17:28.457174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.359 [2024-04-24 16:17:28.457204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.359 qpair failed and we were unable to recover it. 00:21:27.359 [2024-04-24 16:17:28.467028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.359 [2024-04-24 16:17:28.467219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.359 [2024-04-24 16:17:28.467246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.359 [2024-04-24 16:17:28.467261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.359 [2024-04-24 16:17:28.467274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.359 [2024-04-24 16:17:28.467310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.359 qpair failed and we were unable to recover it. 
00:21:27.903 [2024-04-24 16:17:29.138885] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.903 [2024-04-24 16:17:29.139027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.903 [2024-04-24 16:17:29.139051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.903 [2024-04-24 16:17:29.139072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.903 [2024-04-24 16:17:29.139086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.903 [2024-04-24 16:17:29.139116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.903 qpair failed and we were unable to recover it. 00:21:27.903 [2024-04-24 16:17:29.148955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.903 [2024-04-24 16:17:29.149130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.903 [2024-04-24 16:17:29.149156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.903 [2024-04-24 16:17:29.149188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.903 [2024-04-24 16:17:29.149203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.903 [2024-04-24 16:17:29.149233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.903 qpair failed and we were unable to recover it. 00:21:27.903 [2024-04-24 16:17:29.158928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.903 [2024-04-24 16:17:29.159062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.903 [2024-04-24 16:17:29.159088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.903 [2024-04-24 16:17:29.159104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.903 [2024-04-24 16:17:29.159117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.903 [2024-04-24 16:17:29.159146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.903 qpair failed and we were unable to recover it. 
00:21:27.903 [2024-04-24 16:17:29.168974] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.903 [2024-04-24 16:17:29.169111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.903 [2024-04-24 16:17:29.169138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.903 [2024-04-24 16:17:29.169153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.903 [2024-04-24 16:17:29.169166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.903 [2024-04-24 16:17:29.169196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.903 qpair failed and we were unable to recover it. 00:21:27.903 [2024-04-24 16:17:29.178993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:27.903 [2024-04-24 16:17:29.179134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:27.903 [2024-04-24 16:17:29.179161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:27.903 [2024-04-24 16:17:29.179176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:27.903 [2024-04-24 16:17:29.179189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:27.903 [2024-04-24 16:17:29.179218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.903 qpair failed and we were unable to recover it. 00:21:28.165 [2024-04-24 16:17:29.189090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.165 [2024-04-24 16:17:29.189237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.165 [2024-04-24 16:17:29.189262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.165 [2024-04-24 16:17:29.189292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.165 [2024-04-24 16:17:29.189304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.165 [2024-04-24 16:17:29.189348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.165 qpair failed and we were unable to recover it. 
00:21:28.165 [2024-04-24 16:17:29.199072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.165 [2024-04-24 16:17:29.199206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.165 [2024-04-24 16:17:29.199231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.165 [2024-04-24 16:17:29.199246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.165 [2024-04-24 16:17:29.199260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.165 [2024-04-24 16:17:29.199289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.165 qpair failed and we were unable to recover it. 00:21:28.165 [2024-04-24 16:17:29.209080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.209219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.209245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.209261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.209274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.209319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 00:21:28.166 [2024-04-24 16:17:29.219109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.219251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.219276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.219291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.219304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.219334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 
00:21:28.166 [2024-04-24 16:17:29.229240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.229381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.229406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.229427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.229441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.229470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 00:21:28.166 [2024-04-24 16:17:29.239166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.239302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.239327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.239342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.239355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.239385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 00:21:28.166 [2024-04-24 16:17:29.249207] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.249344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.249370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.249385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.249398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.249427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 
00:21:28.166 [2024-04-24 16:17:29.259217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.259364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.259389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.259404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.259416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.259446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 00:21:28.166 [2024-04-24 16:17:29.269276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.269420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.269445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.269460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.269473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.269502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 00:21:28.166 [2024-04-24 16:17:29.279259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.279405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.279431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.279446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.279459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.279489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 
00:21:28.166 [2024-04-24 16:17:29.289290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.289423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.289448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.289463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.289476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.289506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 00:21:28.166 [2024-04-24 16:17:29.299376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.299535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.299562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.299578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.299594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.299627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 00:21:28.166 [2024-04-24 16:17:29.309388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.309534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.309560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.309575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.309588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.309617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 
00:21:28.166 [2024-04-24 16:17:29.319410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.319550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.319580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.319597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.319610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.319655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 00:21:28.166 [2024-04-24 16:17:29.329432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.329566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.329590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.329605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.329619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.166 [2024-04-24 16:17:29.329648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.166 qpair failed and we were unable to recover it. 00:21:28.166 [2024-04-24 16:17:29.339505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.166 [2024-04-24 16:17:29.339651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.166 [2024-04-24 16:17:29.339675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.166 [2024-04-24 16:17:29.339691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.166 [2024-04-24 16:17:29.339704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.339733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 
00:21:28.167 [2024-04-24 16:17:29.349482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.349625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.349650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.349664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.349677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.349706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 00:21:28.167 [2024-04-24 16:17:29.359524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.359658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.359683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.359698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.359711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.359754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 00:21:28.167 [2024-04-24 16:17:29.369549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.369727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.369763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.369780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.369793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.369823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 
00:21:28.167 [2024-04-24 16:17:29.379580] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.379732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.379766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.379782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.379796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.379826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 00:21:28.167 [2024-04-24 16:17:29.389611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.389774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.389799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.389815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.389828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.389857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 00:21:28.167 [2024-04-24 16:17:29.399646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.399796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.399822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.399837] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.399850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.399880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 
00:21:28.167 [2024-04-24 16:17:29.409713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.409858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.409889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.409906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.409920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.409950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 00:21:28.167 [2024-04-24 16:17:29.419736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.419926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.419952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.419967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.419981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.420013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 00:21:28.167 [2024-04-24 16:17:29.429750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.429903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.429930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.429945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.429958] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.429988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 
00:21:28.167 [2024-04-24 16:17:29.439750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.167 [2024-04-24 16:17:29.439892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.167 [2024-04-24 16:17:29.439917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.167 [2024-04-24 16:17:29.439932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.167 [2024-04-24 16:17:29.439946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.167 [2024-04-24 16:17:29.439976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.167 qpair failed and we were unable to recover it. 00:21:28.429 [2024-04-24 16:17:29.449777] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.429 [2024-04-24 16:17:29.449898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.429 [2024-04-24 16:17:29.449925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.429 [2024-04-24 16:17:29.449941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.429 [2024-04-24 16:17:29.449963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.429 [2024-04-24 16:17:29.449995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.429 qpair failed and we were unable to recover it. 00:21:28.429 [2024-04-24 16:17:29.459848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.429 [2024-04-24 16:17:29.460003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.429 [2024-04-24 16:17:29.460029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.429 [2024-04-24 16:17:29.460044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.429 [2024-04-24 16:17:29.460057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.429 [2024-04-24 16:17:29.460087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.429 qpair failed and we were unable to recover it. 
00:21:28.429 [2024-04-24 16:17:29.469850] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.429 [2024-04-24 16:17:29.469981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.429 [2024-04-24 16:17:29.470007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.429 [2024-04-24 16:17:29.470022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.429 [2024-04-24 16:17:29.470036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.429 [2024-04-24 16:17:29.470078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.429 qpair failed and we were unable to recover it. 00:21:28.429 [2024-04-24 16:17:29.479893] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.429 [2024-04-24 16:17:29.480034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.429 [2024-04-24 16:17:29.480060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.429 [2024-04-24 16:17:29.480075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.429 [2024-04-24 16:17:29.480088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.429 [2024-04-24 16:17:29.480118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.429 qpair failed and we were unable to recover it. 00:21:28.429 [2024-04-24 16:17:29.489878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.429 [2024-04-24 16:17:29.490021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.429 [2024-04-24 16:17:29.490048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.429 [2024-04-24 16:17:29.490064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.429 [2024-04-24 16:17:29.490077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.429 [2024-04-24 16:17:29.490107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.429 qpair failed and we were unable to recover it. 
00:21:28.430 [2024-04-24 16:17:29.499951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.500102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.500132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.500151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.500165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.500196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 00:21:28.430 [2024-04-24 16:17:29.510057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.510203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.510230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.510247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.510259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.510289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 00:21:28.430 [2024-04-24 16:17:29.520032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.520173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.520201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.520232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.520245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.520275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 
00:21:28.430 [2024-04-24 16:17:29.530003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.530137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.530162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.530177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.530191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.530221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 00:21:28.430 [2024-04-24 16:17:29.540079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.540221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.540247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.540263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.540282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.540313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 00:21:28.430 [2024-04-24 16:17:29.550082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.550222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.550247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.550263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.550276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.550321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 
00:21:28.430 [2024-04-24 16:17:29.560106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.560240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.560266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.560282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.560295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.560324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 00:21:28.430 [2024-04-24 16:17:29.570097] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.570237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.570263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.570278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.570292] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.570322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 00:21:28.430 [2024-04-24 16:17:29.580247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.580389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.580415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.580431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.580444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.580474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 
00:21:28.430 [2024-04-24 16:17:29.590196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.590341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.590368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.590384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.590396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.590439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 00:21:28.430 [2024-04-24 16:17:29.600256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.600401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.600428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.600443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.600456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.600500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 00:21:28.430 [2024-04-24 16:17:29.610225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.610368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.610394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.610409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.610421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.610451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 
00:21:28.430 [2024-04-24 16:17:29.620271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.620443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.620469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.430 [2024-04-24 16:17:29.620485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.430 [2024-04-24 16:17:29.620498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.430 [2024-04-24 16:17:29.620540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.430 qpair failed and we were unable to recover it. 00:21:28.430 [2024-04-24 16:17:29.630316] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.430 [2024-04-24 16:17:29.630452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.430 [2024-04-24 16:17:29.630476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.431 [2024-04-24 16:17:29.630497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.431 [2024-04-24 16:17:29.630511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.431 [2024-04-24 16:17:29.630541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.431 qpair failed and we were unable to recover it. 00:21:28.431 [2024-04-24 16:17:29.640311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.431 [2024-04-24 16:17:29.640450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.431 [2024-04-24 16:17:29.640475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.431 [2024-04-24 16:17:29.640491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.431 [2024-04-24 16:17:29.640504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.431 [2024-04-24 16:17:29.640533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.431 qpair failed and we were unable to recover it. 
00:21:28.431 [2024-04-24 16:17:29.650356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.431 [2024-04-24 16:17:29.650490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.431 [2024-04-24 16:17:29.650516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.431 [2024-04-24 16:17:29.650531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.431 [2024-04-24 16:17:29.650544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.431 [2024-04-24 16:17:29.650573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.431 qpair failed and we were unable to recover it. 00:21:28.431 [2024-04-24 16:17:29.660393] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.431 [2024-04-24 16:17:29.660534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.431 [2024-04-24 16:17:29.660559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.431 [2024-04-24 16:17:29.660574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.431 [2024-04-24 16:17:29.660587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.431 [2024-04-24 16:17:29.660617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.431 qpair failed and we were unable to recover it. 00:21:28.431 [2024-04-24 16:17:29.670457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.431 [2024-04-24 16:17:29.670607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.431 [2024-04-24 16:17:29.670633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.431 [2024-04-24 16:17:29.670648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.431 [2024-04-24 16:17:29.670662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.431 [2024-04-24 16:17:29.670691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.431 qpair failed and we were unable to recover it. 
00:21:28.431 [2024-04-24 16:17:29.680440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.431 [2024-04-24 16:17:29.680582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.431 [2024-04-24 16:17:29.680608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.431 [2024-04-24 16:17:29.680623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.431 [2024-04-24 16:17:29.680636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.431 [2024-04-24 16:17:29.680681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.431 qpair failed and we were unable to recover it. 00:21:28.431 [2024-04-24 16:17:29.690475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.431 [2024-04-24 16:17:29.690618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.431 [2024-04-24 16:17:29.690643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.431 [2024-04-24 16:17:29.690658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.431 [2024-04-24 16:17:29.690672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.431 [2024-04-24 16:17:29.690701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.431 qpair failed and we were unable to recover it. 00:21:28.431 [2024-04-24 16:17:29.700531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.431 [2024-04-24 16:17:29.700674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.431 [2024-04-24 16:17:29.700698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.431 [2024-04-24 16:17:29.700714] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.431 [2024-04-24 16:17:29.700727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.431 [2024-04-24 16:17:29.700764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.431 qpair failed and we were unable to recover it. 
00:21:28.431 [2024-04-24 16:17:29.710666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.431 [2024-04-24 16:17:29.710811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.431 [2024-04-24 16:17:29.710837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.431 [2024-04-24 16:17:29.710852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.431 [2024-04-24 16:17:29.710865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.431 [2024-04-24 16:17:29.710895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.431 qpair failed and we were unable to recover it. 00:21:28.691 [2024-04-24 16:17:29.720597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.691 [2024-04-24 16:17:29.720779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.691 [2024-04-24 16:17:29.720810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.691 [2024-04-24 16:17:29.720827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.691 [2024-04-24 16:17:29.720840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.691 [2024-04-24 16:17:29.720883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.691 qpair failed and we were unable to recover it. 00:21:28.691 [2024-04-24 16:17:29.730581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.691 [2024-04-24 16:17:29.730717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.691 [2024-04-24 16:17:29.730751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.691 [2024-04-24 16:17:29.730768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.691 [2024-04-24 16:17:29.730781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.691 [2024-04-24 16:17:29.730812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.691 qpair failed and we were unable to recover it. 
00:21:28.691 [2024-04-24 16:17:29.740658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.691 [2024-04-24 16:17:29.740809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.691 [2024-04-24 16:17:29.740834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.691 [2024-04-24 16:17:29.740849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.691 [2024-04-24 16:17:29.740862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.691 [2024-04-24 16:17:29.740892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.691 qpair failed and we were unable to recover it. 00:21:28.691 [2024-04-24 16:17:29.750654] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.691 [2024-04-24 16:17:29.750799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.691 [2024-04-24 16:17:29.750825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.691 [2024-04-24 16:17:29.750840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.691 [2024-04-24 16:17:29.750853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.691 [2024-04-24 16:17:29.750883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.691 qpair failed and we were unable to recover it. 00:21:28.691 [2024-04-24 16:17:29.760680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.691 [2024-04-24 16:17:29.760857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.691 [2024-04-24 16:17:29.760883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.691 [2024-04-24 16:17:29.760898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.691 [2024-04-24 16:17:29.760911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.691 [2024-04-24 16:17:29.760947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.691 qpair failed and we were unable to recover it. 
00:21:28.691 [2024-04-24 16:17:29.770703] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.691 [2024-04-24 16:17:29.770858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.691 [2024-04-24 16:17:29.770884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.691 [2024-04-24 16:17:29.770899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.691 [2024-04-24 16:17:29.770913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.691 [2024-04-24 16:17:29.770943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.691 qpair failed and we were unable to recover it. 00:21:28.691 [2024-04-24 16:17:29.780730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.691 [2024-04-24 16:17:29.780872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.691 [2024-04-24 16:17:29.780897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.691 [2024-04-24 16:17:29.780912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.691 [2024-04-24 16:17:29.780925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.691 [2024-04-24 16:17:29.780955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.691 qpair failed and we were unable to recover it. 00:21:28.691 [2024-04-24 16:17:29.790755] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.790887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.790913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.790929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.790942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.790972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 
00:21:28.692 [2024-04-24 16:17:29.800791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.800975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.801001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.801016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.801030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.801074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 00:21:28.692 [2024-04-24 16:17:29.810828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.810993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.811025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.811042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.811055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.811100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 00:21:28.692 [2024-04-24 16:17:29.820887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.821031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.821059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.821074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.821087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.821117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 
00:21:28.692 [2024-04-24 16:17:29.830891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.831041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.831068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.831084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.831097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.831126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 00:21:28.692 [2024-04-24 16:17:29.840905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.841036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.841063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.841078] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.841092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.841121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 00:21:28.692 [2024-04-24 16:17:29.850929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.851057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.851084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.851100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.851119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.851149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 
00:21:28.692 [2024-04-24 16:17:29.860987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.861126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.861154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.861170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.861182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.861212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 00:21:28.692 [2024-04-24 16:17:29.870995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.871128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.871155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.871171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.871185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.871215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 00:21:28.692 [2024-04-24 16:17:29.881007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.881145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.881171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.881186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.881198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.881229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 
00:21:28.692 [2024-04-24 16:17:29.891066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.891213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.891239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.891254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.891267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.891298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 00:21:28.692 [2024-04-24 16:17:29.901117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.901270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.901296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.692 [2024-04-24 16:17:29.901311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.692 [2024-04-24 16:17:29.901325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.692 [2024-04-24 16:17:29.901355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.692 qpair failed and we were unable to recover it. 00:21:28.692 [2024-04-24 16:17:29.911099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.692 [2024-04-24 16:17:29.911253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.692 [2024-04-24 16:17:29.911279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.693 [2024-04-24 16:17:29.911294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.693 [2024-04-24 16:17:29.911308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.693 [2024-04-24 16:17:29.911338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.693 qpair failed and we were unable to recover it. 
00:21:28.693 [2024-04-24 16:17:29.921160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.693 [2024-04-24 16:17:29.921303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.693 [2024-04-24 16:17:29.921331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.693 [2024-04-24 16:17:29.921346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.693 [2024-04-24 16:17:29.921359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.693 [2024-04-24 16:17:29.921390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.693 qpair failed and we were unable to recover it. 00:21:28.693 [2024-04-24 16:17:29.931187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.693 [2024-04-24 16:17:29.931319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.693 [2024-04-24 16:17:29.931345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.693 [2024-04-24 16:17:29.931360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.693 [2024-04-24 16:17:29.931373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.693 [2024-04-24 16:17:29.931415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.693 qpair failed and we were unable to recover it. 00:21:28.693 [2024-04-24 16:17:29.941233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.693 [2024-04-24 16:17:29.941383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.693 [2024-04-24 16:17:29.941412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.693 [2024-04-24 16:17:29.941426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.693 [2024-04-24 16:17:29.941446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.693 [2024-04-24 16:17:29.941476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.693 qpair failed and we were unable to recover it. 
00:21:28.693 [2024-04-24 16:17:29.951237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.693 [2024-04-24 16:17:29.951375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.693 [2024-04-24 16:17:29.951401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.693 [2024-04-24 16:17:29.951417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.693 [2024-04-24 16:17:29.951430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.693 [2024-04-24 16:17:29.951459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.693 qpair failed and we were unable to recover it. 00:21:28.693 [2024-04-24 16:17:29.961303] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.693 [2024-04-24 16:17:29.961440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.693 [2024-04-24 16:17:29.961466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.693 [2024-04-24 16:17:29.961482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.693 [2024-04-24 16:17:29.961495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.693 [2024-04-24 16:17:29.961539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.693 qpair failed and we were unable to recover it. 00:21:28.693 [2024-04-24 16:17:29.971246] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.693 [2024-04-24 16:17:29.971379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.693 [2024-04-24 16:17:29.971405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.693 [2024-04-24 16:17:29.971420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.693 [2024-04-24 16:17:29.971434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.693 [2024-04-24 16:17:29.971478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.693 qpair failed and we were unable to recover it. 
00:21:28.952 [2024-04-24 16:17:29.981335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.952 [2024-04-24 16:17:29.981486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.952 [2024-04-24 16:17:29.981510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:29.981525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:29.981538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:29.981568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 00:21:28.953 [2024-04-24 16:17:29.991307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:29.991484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:29.991509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:29.991524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:29.991538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:29.991568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 00:21:28.953 [2024-04-24 16:17:30.001383] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.001523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.001550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.001565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.001578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.001608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 
00:21:28.953 [2024-04-24 16:17:30.011404] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.011541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.011570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.011587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.011600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.011658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 00:21:28.953 [2024-04-24 16:17:30.021414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.021605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.021633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.021648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.021661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.021692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 00:21:28.953 [2024-04-24 16:17:30.031492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.031647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.031675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.031697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.031711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.031750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 
00:21:28.953 [2024-04-24 16:17:30.041476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.041622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.041651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.041667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.041680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.041710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 00:21:28.953 [2024-04-24 16:17:30.051511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.051665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.051692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.051707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.051720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.051759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 00:21:28.953 [2024-04-24 16:17:30.061612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.061770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.061797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.061813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.061826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.061856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 
00:21:28.953 [2024-04-24 16:17:30.071573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.071713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.071739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.071763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.071777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.071808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 00:21:28.953 [2024-04-24 16:17:30.081579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.081710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.081736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.081763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.081777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.081808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 00:21:28.953 [2024-04-24 16:17:30.091592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.091727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.091761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.091778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.091790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.091820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 
00:21:28.953 [2024-04-24 16:17:30.101688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.101887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.101913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.101929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.953 [2024-04-24 16:17:30.101942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.953 [2024-04-24 16:17:30.101972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.953 qpair failed and we were unable to recover it. 00:21:28.953 [2024-04-24 16:17:30.111690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.953 [2024-04-24 16:17:30.111832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.953 [2024-04-24 16:17:30.111857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.953 [2024-04-24 16:17:30.111873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.111885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.111915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 00:21:28.954 [2024-04-24 16:17:30.121692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.121832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.121864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.121881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.121893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.121923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 
00:21:28.954 [2024-04-24 16:17:30.131736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.131876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.131905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.131924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.131937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.131968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 00:21:28.954 [2024-04-24 16:17:30.141779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.141920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.141948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.141963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.141977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.142007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 00:21:28.954 [2024-04-24 16:17:30.151789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.151927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.151953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.151969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.151982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.152024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 
00:21:28.954 [2024-04-24 16:17:30.161816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.161952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.161977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.161992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.162005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.162041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 00:21:28.954 [2024-04-24 16:17:30.171872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.172020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.172049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.172067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.172081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.172126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 00:21:28.954 [2024-04-24 16:17:30.181881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.182025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.182052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.182068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.182081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.182110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 
00:21:28.954 [2024-04-24 16:17:30.191934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.192099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.192126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.192142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.192155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.192184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 00:21:28.954 [2024-04-24 16:17:30.201929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.202080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.202107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.202123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.202136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.202181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 00:21:28.954 [2024-04-24 16:17:30.211937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.212108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.212141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.212157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.212169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.212200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 
00:21:28.954 [2024-04-24 16:17:30.222006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.222151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.222177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.222193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.222206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.222235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 00:21:28.954 [2024-04-24 16:17:30.232041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:28.954 [2024-04-24 16:17:30.232182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:28.954 [2024-04-24 16:17:30.232208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:28.954 [2024-04-24 16:17:30.232224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:28.954 [2024-04-24 16:17:30.232237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:28.954 [2024-04-24 16:17:30.232267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.954 qpair failed and we were unable to recover it. 00:21:29.212 [2024-04-24 16:17:30.242080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.212 [2024-04-24 16:17:30.242216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.212 [2024-04-24 16:17:30.242242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.212 [2024-04-24 16:17:30.242258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.212 [2024-04-24 16:17:30.242271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:29.212 [2024-04-24 16:17:30.242301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:29.212 qpair failed and we were unable to recover it. 
00:21:29.212 [2024-04-24 16:17:30.252111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.212 [2024-04-24 16:17:30.252247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.212 [2024-04-24 16:17:30.252277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.212 [2024-04-24 16:17:30.252293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.212 [2024-04-24 16:17:30.252307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:29.212 [2024-04-24 16:17:30.252357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:29.212 qpair failed and we were unable to recover it. 00:21:29.212 [2024-04-24 16:17:30.262124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.212 [2024-04-24 16:17:30.262302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.212 [2024-04-24 16:17:30.262328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.212 [2024-04-24 16:17:30.262344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.212 [2024-04-24 16:17:30.262357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:29.212 [2024-04-24 16:17:30.262399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:29.212 qpair failed and we were unable to recover it. 00:21:29.212 [2024-04-24 16:17:30.272134] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.212 [2024-04-24 16:17:30.272313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.212 [2024-04-24 16:17:30.272356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.212 [2024-04-24 16:17:30.272372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.212 [2024-04-24 16:17:30.272384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:29.212 [2024-04-24 16:17:30.272428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:29.212 qpair failed and we were unable to recover it. 
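Every record in this stretch of the log fails the same way: the target side (ctrlr.c:706, _nvmf_ctrlr_add_io_qpair) rejects the I/O qpair CONNECT because controller ID 0x1 no longer matches a live controller, and the host then sees the CONNECT completion fail with sct 1, sc 130. A minimal sketch of decoding those fields (the symbolic name is an assumption based on SPDK's include/spdk/nvmf_spec.h, which this log does not quote):

  # sct 1 = Command Specific status type; the host prints sc in decimal.
  sc=130
  printf 'sc %d = 0x%02x\n' "$sc" "$sc"   # -> sc 130 = 0x82
  # For a Fabrics CONNECT command, 0x82 should be
  # SPDK_NVMF_FABRIC_SC_INVALID_PARAM ("connect invalid parameters"),
  # which matches the target-side "Unknown controller ID 0x1" error.

That pattern is consistent with what a disconnect test is meant to provoke: the target-side controller has been torn down while the host keeps retrying I/O-queue connects against a now-stale controller ID.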
00:21:29.212 [2024-04-24 16:17:30.282153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.212 [2024-04-24 16:17:30.282282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.212 [2024-04-24 16:17:30.282308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.212 [2024-04-24 16:17:30.282324] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.212 [2024-04-24 16:17:30.282336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faafc000b90 00:21:29.212 [2024-04-24 16:17:30.282378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:29.212 qpair failed and we were unable to recover it. 00:21:29.212 [2024-04-24 16:17:30.282503] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:21:29.212 A controller has encountered a failure and is being reset. 00:21:29.212 [2024-04-24 16:17:30.282558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbb860 (9): Bad file descriptor 00:21:29.212 Controller properly reset. 00:21:29.212 Initializing NVMe Controllers 00:21:29.212 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:21:29.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:21:29.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:21:29.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:21:29.212 Initialization complete. Launching workers. 
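At this point recovery succeeds: the keep-alive failure triggers a controller reset, the host reattaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and the worker threads started on cores 0-3 just below resume I/O. As a hedged aside, the listener could be verified independently of the test harness with standard nvme-cli, roughly as follows (assumes nvme-cli on the initiator and the same 10.0.0.2:4420 listener; this is not part of this run):

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                   # the attached namespace should appear
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1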
00:21:29.212 Starting thread on core 1 00:21:29.212 Starting thread on core 2 00:21:29.212 Starting thread on core 3 00:21:29.212 Starting thread on core 0 00:21:29.212 16:17:30 -- host/target_disconnect.sh@59 -- # sync 00:21:29.212 00:21:29.212 real 0m10.634s 00:21:29.212 user 0m18.016s 00:21:29.212 sys 0m5.293s 00:21:29.212 16:17:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:29.212 16:17:30 -- common/autotest_common.sh@10 -- # set +x 00:21:29.212 ************************************ 00:21:29.212 END TEST nvmf_target_disconnect_tc2 00:21:29.212 ************************************ 00:21:29.212 16:17:30 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:21:29.212 16:17:30 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:29.212 16:17:30 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:21:29.212 16:17:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:29.212 16:17:30 -- nvmf/common.sh@117 -- # sync 00:21:29.212 16:17:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:29.212 16:17:30 -- nvmf/common.sh@120 -- # set +e 00:21:29.212 16:17:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:29.212 16:17:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:29.212 rmmod nvme_tcp 00:21:29.212 rmmod nvme_fabrics 00:21:29.212 rmmod nvme_keyring 00:21:29.212 16:17:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:29.212 16:17:30 -- nvmf/common.sh@124 -- # set -e 00:21:29.212 16:17:30 -- nvmf/common.sh@125 -- # return 0 00:21:29.212 16:17:30 -- nvmf/common.sh@478 -- # '[' -n 3479633 ']' 00:21:29.212 16:17:30 -- nvmf/common.sh@479 -- # killprocess 3479633 00:21:29.212 16:17:30 -- common/autotest_common.sh@936 -- # '[' -z 3479633 ']' 00:21:29.212 16:17:30 -- common/autotest_common.sh@940 -- # kill -0 3479633 00:21:29.212 16:17:30 -- common/autotest_common.sh@941 -- # uname 00:21:29.212 16:17:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:29.213 16:17:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3479633 00:21:29.213 16:17:30 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:21:29.213 16:17:30 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:21:29.213 16:17:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3479633' 00:21:29.213 killing process with pid 3479633 00:21:29.213 16:17:30 -- common/autotest_common.sh@955 -- # kill 3479633 00:21:29.213 16:17:30 -- common/autotest_common.sh@960 -- # wait 3479633 00:21:29.471 16:17:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:29.471 16:17:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:29.471 16:17:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:29.471 16:17:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.471 16:17:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.471 16:17:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.471 16:17:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.471 16:17:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.034 16:17:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:32.034 00:21:32.034 real 0m15.598s 00:21:32.034 user 0m43.517s 00:21:32.034 sys 0m7.371s 00:21:32.034 16:17:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:32.034 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:32.034 ************************************ 00:21:32.034 END TEST nvmf_target_disconnect 00:21:32.034 
************************************ 00:21:32.034 16:17:32 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:21:32.034 16:17:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:32.034 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:32.034 16:17:32 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:21:32.034 00:21:32.034 real 15m15.684s 00:21:32.034 user 35m16.971s 00:21:32.034 sys 4m12.506s 00:21:32.034 16:17:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:32.034 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:32.034 ************************************ 00:21:32.034 END TEST nvmf_tcp 00:21:32.034 ************************************ 00:21:32.034 16:17:32 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:21:32.034 16:17:32 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:32.034 16:17:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:32.034 16:17:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:32.034 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:32.034 ************************************ 00:21:32.034 START TEST spdkcli_nvmf_tcp 00:21:32.034 ************************************ 00:21:32.034 16:17:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:32.034 * Looking for test storage... 00:21:32.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:21:32.034 16:17:32 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:21:32.034 16:17:32 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:21:32.034 16:17:32 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:21:32.034 16:17:32 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:32.034 16:17:32 -- nvmf/common.sh@7 -- # uname -s 00:21:32.034 16:17:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.034 16:17:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.034 16:17:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.034 16:17:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.034 16:17:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.034 16:17:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.034 16:17:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.034 16:17:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.034 16:17:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.034 16:17:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.034 16:17:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:32.034 16:17:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:32.034 16:17:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.034 16:17:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.034 16:17:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.034 16:17:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.034 16:17:32 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.034 16:17:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.034 16:17:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.034 16:17:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.034 16:17:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.034 16:17:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.034 16:17:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.034 16:17:32 -- paths/export.sh@5 -- # export PATH 00:21:32.034 16:17:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.034 16:17:32 -- nvmf/common.sh@47 -- # : 0 00:21:32.034 16:17:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:32.034 16:17:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:32.034 16:17:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.034 16:17:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.034 16:17:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.034 16:17:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:32.034 16:17:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:32.034 16:17:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:32.034 16:17:32 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:32.034 16:17:32 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:32.034 16:17:32 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:32.034 16:17:32 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:32.034 16:17:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:32.034 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:32.034 16:17:32 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:32.034 16:17:32 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3480832 00:21:32.034 16:17:32 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:32.034 16:17:32 -- spdkcli/common.sh@34 -- # 
waitforlisten 3480832 00:21:32.034 16:17:32 -- common/autotest_common.sh@817 -- # '[' -z 3480832 ']' 00:21:32.034 16:17:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.034 16:17:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:32.034 16:17:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.034 16:17:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:32.034 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:32.034 [2024-04-24 16:17:33.018025] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:21:32.034 [2024-04-24 16:17:33.018135] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3480832 ] 00:21:32.034 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.034 [2024-04-24 16:17:33.075538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:32.034 [2024-04-24 16:17:33.178212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.034 [2024-04-24 16:17:33.178218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.034 16:17:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:32.034 16:17:33 -- common/autotest_common.sh@850 -- # return 0 00:21:32.034 16:17:33 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:32.034 16:17:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:32.034 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:32.295 16:17:33 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:32.295 16:17:33 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:32.295 16:17:33 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:32.295 16:17:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:32.295 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:32.295 16:17:33 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:32.295 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:32.295 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:32.295 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:32.295 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:32.295 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:32.295 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:32.295 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:32.295 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 
True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:32.295 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:32.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:32.295 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:32.295 ' 00:21:32.555 [2024-04-24 16:17:33.703809] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:34.589 [2024-04-24 16:17:35.873294] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.965 [2024-04-24 16:17:37.113636] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:21:38.500 [2024-04-24 16:17:39.380726] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:21:40.404 [2024-04-24 16:17:41.351220] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:41.784 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:41.784 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:41.784 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:41.784 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:41.784 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:41.784 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:41.784 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:41.784 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:41.784 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:41.784 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:41.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:41.784 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:41.784 16:17:42 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:41.784 16:17:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:41.784 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:21:41.784 16:17:42 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:41.784 16:17:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:41.784 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:21:41.784 16:17:42 -- spdkcli/nvmf.sh@69 -- # check_match 00:21:41.784 16:17:42 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:21:42.351 16:17:43 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:42.351 16:17:43 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:42.351 16:17:43 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:42.351 16:17:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:42.351 16:17:43 -- common/autotest_common.sh@10 -- # set +x 00:21:42.351 16:17:43 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:42.351 16:17:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:42.351 16:17:43 -- common/autotest_common.sh@10 -- # set +x 00:21:42.351 16:17:43 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:42.351 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:42.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:42.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:42.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:42.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:42.352 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:42.352 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:42.352 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:42.352 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:42.352 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:42.352 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:42.352 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:42.352 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:42.352 ' 00:21:47.627 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:47.627 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:47.627 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:47.627 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:47.627 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:21:47.627 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:21:47.627 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:47.627 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:47.627 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:47.627 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:47.627 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:47.627 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:47.627 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:47.627 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:47.627 16:17:48 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:47.627 16:17:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:47.627 16:17:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.627 16:17:48 -- spdkcli/nvmf.sh@90 -- # killprocess 3480832 00:21:47.627 16:17:48 -- common/autotest_common.sh@936 -- # '[' -z 3480832 ']' 00:21:47.627 16:17:48 -- common/autotest_common.sh@940 -- # kill -0 3480832 00:21:47.627 16:17:48 -- common/autotest_common.sh@941 -- # uname 00:21:47.627 16:17:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:47.627 16:17:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3480832 00:21:47.627 16:17:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:47.627 16:17:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:47.627 16:17:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3480832' 00:21:47.627 killing process with pid 3480832 00:21:47.627 16:17:48 -- common/autotest_common.sh@955 -- # kill 3480832 00:21:47.627 [2024-04-24 16:17:48.706687] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:47.627 16:17:48 -- common/autotest_common.sh@960 -- # wait 3480832 00:21:47.886 16:17:48 -- spdkcli/nvmf.sh@1 -- # cleanup 00:21:47.886 16:17:48 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:21:47.886 16:17:48 -- spdkcli/common.sh@13 -- # '[' -n 3480832 ']' 00:21:47.886 16:17:48 -- spdkcli/common.sh@14 -- # killprocess 3480832 00:21:47.886 16:17:48 -- common/autotest_common.sh@936 -- # '[' -z 3480832 ']' 00:21:47.886 16:17:48 -- common/autotest_common.sh@940 -- # kill -0 3480832 00:21:47.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3480832) - No such process 00:21:47.886 16:17:48 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3480832 is not found' 00:21:47.886 Process with pid 3480832 is not found 00:21:47.886 16:17:48 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:47.886 16:17:48 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:47.886 16:17:48 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:47.886 00:21:47.886 real 0m16.082s 00:21:47.886 user 0m33.939s 00:21:47.886 sys 0m0.785s 00:21:47.886 16:17:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:47.886 16:17:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.886 ************************************ 00:21:47.886 END TEST spdkcli_nvmf_tcp 00:21:47.886 ************************************ 00:21:47.886 16:17:49 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:47.886 16:17:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:47.886 16:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:47.886 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:21:47.886 ************************************ 00:21:47.886 START TEST 
nvmf_identify_passthru 00:21:47.886 ************************************ 00:21:47.886 16:17:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:47.886 * Looking for test storage... 00:21:47.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:47.886 16:17:49 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.886 16:17:49 -- nvmf/common.sh@7 -- # uname -s 00:21:47.886 16:17:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.886 16:17:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.886 16:17:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.886 16:17:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.886 16:17:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.886 16:17:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.886 16:17:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.886 16:17:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.886 16:17:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.886 16:17:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.147 16:17:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:48.147 16:17:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:48.147 16:17:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.147 16:17:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.147 16:17:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.147 16:17:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.147 16:17:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.147 16:17:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.147 16:17:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.147 16:17:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.147 16:17:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.147 16:17:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.147 16:17:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.147 16:17:49 -- paths/export.sh@5 -- # export PATH 00:21:48.147 16:17:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.147 16:17:49 -- nvmf/common.sh@47 -- # : 0 00:21:48.147 16:17:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.147 16:17:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.147 16:17:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.147 16:17:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.147 16:17:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.147 16:17:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.147 16:17:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.147 16:17:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.147 16:17:49 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.147 16:17:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.147 16:17:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.147 16:17:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.147 16:17:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.147 16:17:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.147 16:17:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.147 16:17:49 -- paths/export.sh@5 -- # export PATH 00:21:48.147 16:17:49 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.147 16:17:49 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:21:48.147 16:17:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:48.147 16:17:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.147 16:17:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:48.147 16:17:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:48.147 16:17:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:48.147 16:17:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.147 16:17:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:48.147 16:17:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.147 16:17:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:48.147 16:17:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:48.147 16:17:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.147 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:21:50.053 16:17:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:50.053 16:17:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:50.053 16:17:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:50.053 16:17:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:50.053 16:17:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:50.053 16:17:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:50.053 16:17:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:50.053 16:17:51 -- nvmf/common.sh@295 -- # net_devs=() 00:21:50.053 16:17:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:50.053 16:17:51 -- nvmf/common.sh@296 -- # e810=() 00:21:50.053 16:17:51 -- nvmf/common.sh@296 -- # local -ga e810 00:21:50.053 16:17:51 -- nvmf/common.sh@297 -- # x722=() 00:21:50.054 16:17:51 -- nvmf/common.sh@297 -- # local -ga x722 00:21:50.054 16:17:51 -- nvmf/common.sh@298 -- # mlx=() 00:21:50.054 16:17:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:50.054 16:17:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.054 16:17:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:50.054 16:17:51 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:50.054 16:17:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:50.054 16:17:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.054 16:17:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:50.054 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:50.054 16:17:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.054 16:17:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:50.054 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:50.054 16:17:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:50.054 16:17:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.054 16:17:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.054 16:17:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:50.054 16:17:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.054 16:17:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:50.054 Found net devices under 0000:09:00.0: cvl_0_0 00:21:50.054 16:17:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.054 16:17:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.054 16:17:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.054 16:17:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:50.054 16:17:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.054 16:17:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:50.054 Found net devices under 0000:09:00.1: cvl_0_1 00:21:50.054 16:17:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.054 16:17:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:50.054 16:17:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:50.054 16:17:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:50.054 16:17:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.054 16:17:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.054 16:17:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.054 16:17:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:50.054 16:17:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.054 16:17:51 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.054 16:17:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:50.054 16:17:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.054 16:17:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.054 16:17:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:50.054 16:17:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:50.054 16:17:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.054 16:17:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.054 16:17:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.054 16:17:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.054 16:17:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:50.054 16:17:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.054 16:17:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.054 16:17:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.054 16:17:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:50.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:21:50.054 00:21:50.054 --- 10.0.0.2 ping statistics --- 00:21:50.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.054 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:21:50.054 16:17:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:21:50.054 00:21:50.054 --- 10.0.0.1 ping statistics --- 00:21:50.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.054 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:21:50.054 16:17:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.054 16:17:51 -- nvmf/common.sh@411 -- # return 0 00:21:50.054 16:17:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:50.054 16:17:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.054 16:17:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:50.054 16:17:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.054 16:17:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:50.054 16:17:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:50.054 16:17:51 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:21:50.054 16:17:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:50.054 16:17:51 -- common/autotest_common.sh@10 -- # set +x 00:21:50.054 16:17:51 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:21:50.054 16:17:51 -- common/autotest_common.sh@1510 -- # bdfs=() 00:21:50.054 16:17:51 -- common/autotest_common.sh@1510 -- # local bdfs 00:21:50.054 16:17:51 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:21:50.054 16:17:51 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:21:50.054 16:17:51 -- common/autotest_common.sh@1499 -- # bdfs=() 00:21:50.054 16:17:51 -- common/autotest_common.sh@1499 -- # local bdfs 00:21:50.054 16:17:51 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:21:50.054 16:17:51 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:50.054 16:17:51 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:21:50.054 16:17:51 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:21:50.054 16:17:51 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:0b:00.0 00:21:50.054 16:17:51 -- common/autotest_common.sh@1513 -- # echo 0000:0b:00.0 00:21:50.054 16:17:51 -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:21:50.054 16:17:51 -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:21:50.054 16:17:51 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:21:50.054 16:17:51 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:21:50.054 16:17:51 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:21:50.054 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.248 16:17:55 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:21:54.248 16:17:55 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:21:54.248 16:17:55 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:21:54.248 16:17:55 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:21:54.248 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.444 16:17:59 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:21:58.444 16:17:59 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:21:58.444 16:17:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:58.444 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.444 16:17:59 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:21:58.444 16:17:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:58.444 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.444 16:17:59 -- target/identify_passthru.sh@31 -- # nvmfpid=3485466 00:21:58.444 16:17:59 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:58.444 16:17:59 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.444 16:17:59 -- target/identify_passthru.sh@35 -- # waitforlisten 3485466 00:21:58.444 16:17:59 -- common/autotest_common.sh@817 -- # '[' -z 3485466 ']' 00:21:58.444 16:17:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.444 16:17:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:58.444 16:17:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.444 16:17:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:58.444 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.444 [2024-04-24 16:17:59.576014] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
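The target here is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so it pauses before subsystem initialization until RPCs arrive; the passthru-identify flag can only be set during that window. The two calls the test issues next through its rpc_cmd wrapper map onto plain scripts/rpc.py invocations roughly as follows (a sketch, using the same flags that appear in the traces below):

  # Run while nvmf_tgt is still waiting for RPCs:
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  scripts/rpc.py framework_start_init

After framework_start_init, the "Custom identify ctrlr handler enabled" notice below confirms the setting took effect.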
00:21:58.444 [2024-04-24 16:17:59.576127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.444 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.444 [2024-04-24 16:17:59.649780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.703 [2024-04-24 16:17:59.767560] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.703 [2024-04-24 16:17:59.767628] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.703 [2024-04-24 16:17:59.767644] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.703 [2024-04-24 16:17:59.767658] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.703 [2024-04-24 16:17:59.767670] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.703 [2024-04-24 16:17:59.767736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.703 [2024-04-24 16:17:59.767795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.703 [2024-04-24 16:17:59.767834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.703 [2024-04-24 16:17:59.767841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.703 16:17:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:58.703 16:17:59 -- common/autotest_common.sh@850 -- # return 0 00:21:58.703 16:17:59 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:21:58.703 16:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.703 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.703 INFO: Log level set to 20 00:21:58.703 INFO: Requests: 00:21:58.703 { 00:21:58.703 "jsonrpc": "2.0", 00:21:58.703 "method": "nvmf_set_config", 00:21:58.703 "id": 1, 00:21:58.703 "params": { 00:21:58.703 "admin_cmd_passthru": { 00:21:58.703 "identify_ctrlr": true 00:21:58.703 } 00:21:58.703 } 00:21:58.703 } 00:21:58.703 00:21:58.703 INFO: response: 00:21:58.703 { 00:21:58.703 "jsonrpc": "2.0", 00:21:58.703 "id": 1, 00:21:58.703 "result": true 00:21:58.703 } 00:21:58.703 00:21:58.703 16:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.703 16:17:59 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:21:58.703 16:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.703 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.703 INFO: Setting log level to 20 00:21:58.703 INFO: Setting log level to 20 00:21:58.703 INFO: Log level set to 20 00:21:58.703 INFO: Log level set to 20 00:21:58.703 INFO: Requests: 00:21:58.703 { 00:21:58.703 "jsonrpc": "2.0", 00:21:58.703 "method": "framework_start_init", 00:21:58.703 "id": 1 00:21:58.703 } 00:21:58.703 00:21:58.703 INFO: Requests: 00:21:58.703 { 00:21:58.703 "jsonrpc": "2.0", 00:21:58.703 "method": "framework_start_init", 00:21:58.703 "id": 1 00:21:58.703 } 00:21:58.703 00:21:58.703 [2024-04-24 16:17:59.935099] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:21:58.703 INFO: response: 00:21:58.703 { 00:21:58.703 "jsonrpc": "2.0", 00:21:58.703 "id": 1, 00:21:58.703 "result": true 00:21:58.703 } 00:21:58.703 00:21:58.703 INFO: response: 00:21:58.703 { 00:21:58.703 
"jsonrpc": "2.0", 00:21:58.703 "id": 1, 00:21:58.703 "result": true 00:21:58.703 } 00:21:58.703 00:21:58.703 16:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.703 16:17:59 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.703 16:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.703 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.703 INFO: Setting log level to 40 00:21:58.703 INFO: Setting log level to 40 00:21:58.703 INFO: Setting log level to 40 00:21:58.703 [2024-04-24 16:17:59.945243] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.703 16:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.703 16:17:59 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:21:58.703 16:17:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:58.703 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.703 16:17:59 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:21:58.703 16:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.703 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:22:01.989 Nvme0n1 00:22:01.989 16:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.989 16:18:02 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:01.989 16:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.989 16:18:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.989 16:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.989 16:18:02 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:01.989 16:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.989 16:18:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.989 16:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.989 16:18:02 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.989 16:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.989 16:18:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.989 [2024-04-24 16:18:02.841319] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.989 16:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.989 16:18:02 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:01.989 16:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.989 16:18:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.989 [2024-04-24 16:18:02.849041] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:01.989 [ 00:22:01.989 { 00:22:01.989 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:01.989 "subtype": "Discovery", 00:22:01.989 "listen_addresses": [], 00:22:01.989 "allow_any_host": true, 00:22:01.989 "hosts": [] 00:22:01.989 }, 00:22:01.989 { 00:22:01.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.989 "subtype": "NVMe", 00:22:01.989 "listen_addresses": [ 00:22:01.989 { 00:22:01.989 "transport": "TCP", 00:22:01.989 "trtype": "TCP", 00:22:01.989 "adrfam": "IPv4", 00:22:01.989 "traddr": "10.0.0.2", 00:22:01.989 "trsvcid": "4420" 00:22:01.989 } 00:22:01.989 ], 
00:22:01.989 "allow_any_host": true, 00:22:01.989 "hosts": [], 00:22:01.989 "serial_number": "SPDK00000000000001", 00:22:01.989 "model_number": "SPDK bdev Controller", 00:22:01.989 "max_namespaces": 1, 00:22:01.989 "min_cntlid": 1, 00:22:01.989 "max_cntlid": 65519, 00:22:01.989 "namespaces": [ 00:22:01.989 { 00:22:01.989 "nsid": 1, 00:22:01.989 "bdev_name": "Nvme0n1", 00:22:01.989 "name": "Nvme0n1", 00:22:01.989 "nguid": "06296629169144FCB4EDF20F6FB90892", 00:22:01.989 "uuid": "06296629-1691-44fc-b4ed-f20f6fb90892" 00:22:01.989 } 00:22:01.989 ] 00:22:01.989 } 00:22:01.989 ] 00:22:01.989 16:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.989 16:18:02 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:01.989 16:18:02 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:01.989 16:18:02 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:01.989 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.989 16:18:03 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:22:01.989 16:18:03 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:01.989 16:18:03 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:01.989 16:18:03 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:01.989 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.989 16:18:03 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:22:01.989 16:18:03 -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:22:01.989 16:18:03 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:22:01.989 16:18:03 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.989 16:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.989 16:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.989 16:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.989 16:18:03 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:01.989 16:18:03 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:01.989 16:18:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:01.989 16:18:03 -- nvmf/common.sh@117 -- # sync 00:22:01.989 16:18:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:01.989 16:18:03 -- nvmf/common.sh@120 -- # set +e 00:22:01.989 16:18:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:01.989 16:18:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:01.989 rmmod nvme_tcp 00:22:01.989 rmmod nvme_fabrics 00:22:01.989 rmmod nvme_keyring 00:22:01.989 16:18:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:01.989 16:18:03 -- nvmf/common.sh@124 -- # set -e 00:22:01.989 16:18:03 -- nvmf/common.sh@125 -- # return 0 00:22:01.989 16:18:03 -- nvmf/common.sh@478 -- # '[' -n 3485466 ']' 00:22:01.989 16:18:03 -- nvmf/common.sh@479 -- # killprocess 3485466 00:22:01.989 16:18:03 -- common/autotest_common.sh@936 -- # '[' -z 3485466 ']' 00:22:01.989 16:18:03 -- common/autotest_common.sh@940 -- # kill -0 3485466 00:22:01.989 16:18:03 -- common/autotest_common.sh@941 -- # uname 00:22:01.989 16:18:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:01.989 
16:18:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3485466 00:22:01.989 16:18:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:01.989 16:18:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:01.989 16:18:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3485466' 00:22:01.989 killing process with pid 3485466 00:22:01.989 16:18:03 -- common/autotest_common.sh@955 -- # kill 3485466 00:22:01.989 [2024-04-24 16:18:03.207847] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:01.989 16:18:03 -- common/autotest_common.sh@960 -- # wait 3485466 00:22:03.896 16:18:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:03.896 16:18:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:03.896 16:18:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:03.896 16:18:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.896 16:18:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.896 16:18:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.896 16:18:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:03.896 16:18:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.804 16:18:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.804 00:22:05.804 real 0m17.705s 00:22:05.804 user 0m26.251s 00:22:05.804 sys 0m2.162s 00:22:05.804 16:18:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:05.804 16:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.804 ************************************ 00:22:05.804 END TEST nvmf_identify_passthru 00:22:05.804 ************************************ 00:22:05.804 16:18:06 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:22:05.804 16:18:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:05.804 16:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:05.804 16:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.804 ************************************ 00:22:05.804 START TEST nvmf_dif 00:22:05.804 ************************************ 00:22:05.804 16:18:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:22:05.804 * Looking for test storage... 
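nvmf_dif (dif.sh) exercises end-to-end data protection over NVMe/TCP: the target is started with the transport option --dif-insert-or-strip so protection information is inserted and stripped at the transport layer, and fio then runs against null bdevs created with various DIF types. The transport setup appears verbatim further below; shown here for orientation (rpc_cmd is the suite's RPC wrapper):

    rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip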
00:22:05.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:05.804 16:18:06 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.804 16:18:06 -- nvmf/common.sh@7 -- # uname -s 00:22:05.804 16:18:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.804 16:18:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.804 16:18:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.804 16:18:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.804 16:18:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.804 16:18:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.804 16:18:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.804 16:18:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.804 16:18:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.804 16:18:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.804 16:18:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:05.804 16:18:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:05.804 16:18:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.804 16:18:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.804 16:18:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.804 16:18:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.804 16:18:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.804 16:18:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.804 16:18:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.804 16:18:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.804 16:18:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.804 16:18:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.804 16:18:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.804 16:18:06 -- paths/export.sh@5 -- # export PATH 00:22:05.804 16:18:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.804 16:18:06 -- nvmf/common.sh@47 -- # : 0 00:22:05.804 16:18:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.804 16:18:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.804 16:18:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.804 16:18:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.805 16:18:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.805 16:18:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.805 16:18:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.805 16:18:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.805 16:18:07 -- target/dif.sh@15 -- # NULL_META=16 00:22:05.805 16:18:07 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:05.805 16:18:07 -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:05.805 16:18:07 -- target/dif.sh@15 -- # NULL_DIF=1 00:22:05.805 16:18:07 -- target/dif.sh@135 -- # nvmftestinit 00:22:05.805 16:18:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:05.805 16:18:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.805 16:18:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:05.805 16:18:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:05.805 16:18:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:05.805 16:18:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.805 16:18:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:05.805 16:18:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.805 16:18:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:05.805 16:18:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:05.805 16:18:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.805 16:18:07 -- common/autotest_common.sh@10 -- # set +x 00:22:07.801 16:18:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:07.801 16:18:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.801 16:18:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.801 16:18:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.801 16:18:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.801 16:18:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.801 16:18:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.801 16:18:08 -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.801 16:18:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.801 16:18:08 -- nvmf/common.sh@296 -- # e810=() 00:22:07.801 16:18:08 -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.801 16:18:08 -- nvmf/common.sh@297 -- # x722=() 00:22:07.801 16:18:08 -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.801 16:18:08 -- nvmf/common.sh@298 -- # mlx=() 00:22:07.801 16:18:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.801 16:18:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
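gather_supported_nvmf_pci_devs (above) fills the e810/x722/mlx arrays from a PCI-bus cache keyed by vendor:device ID; with SPDK_TEST_NVMF_NICS=e810 only the Intel 0x1592/0x159b entries matter, and this host matches 0x159b twice (next lines). The same lookup can be reproduced by hand, assuming lspci is available on the host:

    lspci -d 8086:159b   # E810 ports; here 0000:09:00.0 and 0000:09:00.1 (ice driver)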
00:22:07.801 16:18:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.801 16:18:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.801 16:18:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.801 16:18:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.801 16:18:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.801 16:18:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:07.801 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:07.801 16:18:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.801 16:18:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:07.801 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:07.801 16:18:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.801 16:18:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.801 16:18:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.801 16:18:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.801 16:18:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.801 16:18:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:07.801 Found net devices under 0000:09:00.0: cvl_0_0 00:22:07.801 16:18:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.801 16:18:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.801 16:18:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.801 16:18:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.801 16:18:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.801 16:18:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:07.801 Found net devices under 0000:09:00.1: cvl_0_1 00:22:07.801 16:18:08 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:07.801 16:18:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:07.801 16:18:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:07.801 16:18:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:07.801 16:18:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:07.801 16:18:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.801 16:18:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.801 16:18:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.801 16:18:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.801 16:18:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.802 16:18:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.802 16:18:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.802 16:18:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.802 16:18:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.802 16:18:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.802 16:18:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.802 16:18:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.802 16:18:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.802 16:18:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.802 16:18:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.802 16:18:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.802 16:18:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.802 16:18:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.802 16:18:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.802 16:18:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:07.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:22:07.802 00:22:07.802 --- 10.0.0.2 ping statistics --- 00:22:07.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.802 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:07.802 16:18:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:07.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:22:07.802 00:22:07.802 --- 10.0.0.1 ping statistics --- 00:22:07.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.802 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:22:07.802 16:18:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.802 16:18:08 -- nvmf/common.sh@411 -- # return 0 00:22:07.802 16:18:08 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:22:07.802 16:18:08 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:08.734 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:08.734 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:08.734 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:08.734 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:08.734 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:08.734 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:08.734 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:08.734 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:08.734 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:08.734 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:22:08.734 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:08.734 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:08.734 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:08.734 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:08.734 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:08.734 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:08.734 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:08.993 16:18:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.993 16:18:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:08.993 16:18:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:08.993 16:18:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.993 16:18:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:08.993 16:18:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:08.993 16:18:10 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:08.993 16:18:10 -- target/dif.sh@137 -- # nvmfappstart 00:22:08.993 16:18:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:08.993 16:18:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:08.993 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:08.993 16:18:10 -- nvmf/common.sh@470 -- # nvmfpid=3489243 00:22:08.993 16:18:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:08.993 16:18:10 -- nvmf/common.sh@471 -- # waitforlisten 3489243 00:22:08.993 16:18:10 -- common/autotest_common.sh@817 -- # '[' -z 3489243 ']' 00:22:08.993 16:18:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.993 16:18:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:08.993 16:18:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
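nvmfappstart (below) launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace created earlier, so the TCP listener binds to 10.0.0.2 from within that namespace while the initiator side stays on the host. A minimal sketch of the launch-and-wait pattern, mirroring the log with the workspace path shortened:

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # common.sh helper: polls /var/tmp/spdk.sock until the RPC server answers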
00:22:08.993 16:18:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:08.993 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:08.993 [2024-04-24 16:18:10.139815] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:22:08.993 [2024-04-24 16:18:10.139888] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.993 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.993 [2024-04-24 16:18:10.206417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.251 [2024-04-24 16:18:10.312624] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.251 [2024-04-24 16:18:10.312683] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.251 [2024-04-24 16:18:10.312697] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.251 [2024-04-24 16:18:10.312708] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.251 [2024-04-24 16:18:10.312718] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.251 [2024-04-24 16:18:10.312780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.251 16:18:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:09.251 16:18:10 -- common/autotest_common.sh@850 -- # return 0 00:22:09.251 16:18:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:09.251 16:18:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:09.251 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.251 16:18:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.251 16:18:10 -- target/dif.sh@139 -- # create_transport 00:22:09.251 16:18:10 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:09.251 16:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.251 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.251 [2024-04-24 16:18:10.461514] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.251 16:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.251 16:18:10 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:09.251 16:18:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:09.251 16:18:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:09.251 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.509 ************************************ 00:22:09.509 START TEST fio_dif_1_default 00:22:09.509 ************************************ 00:22:09.509 16:18:10 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:22:09.509 16:18:10 -- target/dif.sh@86 -- # create_subsystems 0 00:22:09.509 16:18:10 -- target/dif.sh@28 -- # local sub 00:22:09.509 16:18:10 -- target/dif.sh@30 -- # for sub in "$@" 00:22:09.509 16:18:10 -- target/dif.sh@31 -- # create_subsystem 0 00:22:09.509 16:18:10 -- target/dif.sh@18 -- # local sub_id=0 00:22:09.509 16:18:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:09.509 16:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.509 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.509 
bdev_null0 00:22:09.509 16:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.509 16:18:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:09.509 16:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.509 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.509 16:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.509 16:18:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:09.509 16:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.509 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.509 16:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.509 16:18:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:09.509 16:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.509 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.509 [2024-04-24 16:18:10.590038] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.509 16:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.509 16:18:10 -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:09.509 16:18:10 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:09.509 16:18:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:09.509 16:18:10 -- nvmf/common.sh@521 -- # config=() 00:22:09.509 16:18:10 -- nvmf/common.sh@521 -- # local subsystem config 00:22:09.509 16:18:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.509 16:18:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.509 { 00:22:09.509 "params": { 00:22:09.509 "name": "Nvme$subsystem", 00:22:09.509 "trtype": "$TEST_TRANSPORT", 00:22:09.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.509 "adrfam": "ipv4", 00:22:09.509 "trsvcid": "$NVMF_PORT", 00:22:09.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.509 "hdgst": ${hdgst:-false}, 00:22:09.509 "ddgst": ${ddgst:-false} 00:22:09.509 }, 00:22:09.509 "method": "bdev_nvme_attach_controller" 00:22:09.509 } 00:22:09.509 EOF 00:22:09.509 )") 00:22:09.509 16:18:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:09.509 16:18:10 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:09.509 16:18:10 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:09.509 16:18:10 -- target/dif.sh@82 -- # gen_fio_conf 00:22:09.509 16:18:10 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:09.509 16:18:10 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:09.509 16:18:10 -- target/dif.sh@54 -- # local file 00:22:09.509 16:18:10 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:09.509 16:18:10 -- target/dif.sh@56 -- # cat 00:22:09.509 16:18:10 -- common/autotest_common.sh@1327 -- # shift 00:22:09.509 16:18:10 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:09.509 16:18:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.510 16:18:10 -- nvmf/common.sh@543 -- # cat 00:22:09.510 16:18:10 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:09.510 16:18:10 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:09.510 16:18:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:09.510 16:18:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:09.510 16:18:10 -- target/dif.sh@72 -- # (( file <= files )) 00:22:09.510 16:18:10 -- nvmf/common.sh@545 -- # jq . 00:22:09.510 16:18:10 -- nvmf/common.sh@546 -- # IFS=, 00:22:09.510 16:18:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:09.510 "params": { 00:22:09.510 "name": "Nvme0", 00:22:09.510 "trtype": "tcp", 00:22:09.510 "traddr": "10.0.0.2", 00:22:09.510 "adrfam": "ipv4", 00:22:09.510 "trsvcid": "4420", 00:22:09.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:09.510 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:09.510 "hdgst": false, 00:22:09.510 "ddgst": false 00:22:09.510 }, 00:22:09.510 "method": "bdev_nvme_attach_controller" 00:22:09.510 }' 00:22:09.510 16:18:10 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:09.510 16:18:10 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:09.510 16:18:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.510 16:18:10 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:09.510 16:18:10 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:09.510 16:18:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:09.510 16:18:10 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:09.510 16:18:10 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:09.510 16:18:10 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:09.510 16:18:10 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:09.767 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:09.767 fio-3.35 00:22:09.767 Starting 1 thread 00:22:09.767 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.958 00:22:21.958 filename0: (groupid=0, jobs=1): err= 0: pid=3489479: Wed Apr 24 16:18:21 2024 00:22:21.958 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10010msec) 00:22:21.958 slat (nsec): min=6530, max=60998, avg=8503.74, stdev=3150.07 00:22:21.958 clat (usec): min=725, max=44504, avg=21090.84, stdev=20131.71 00:22:21.958 lat (usec): min=732, max=44532, avg=21099.35, stdev=20131.56 00:22:21.958 clat percentiles (usec): 00:22:21.958 | 1.00th=[ 832], 5.00th=[ 848], 10.00th=[ 857], 20.00th=[ 865], 00:22:21.958 | 30.00th=[ 881], 40.00th=[ 898], 50.00th=[40633], 60.00th=[41157], 00:22:21.958 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:22:21.958 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:22:21.958 | 99.99th=[44303] 00:22:21.958 bw ( KiB/s): min= 672, max= 768, per=99.78%, avg=756.80, stdev=26.01, samples=20 00:22:21.958 iops : min= 168, max= 192, avg=189.20, stdev= 6.50, samples=20 00:22:21.958 lat (usec) : 750=0.21%, 1000=48.63% 00:22:21.958 lat (msec) : 2=0.95%, 50=50.21% 00:22:21.958 cpu : usr=89.08%, sys=10.65%, ctx=19, majf=0, minf=268 00:22:21.958 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:21.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:21.958 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.958 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:21.958 00:22:21.958 Run status group 0 (all jobs): 00:22:21.958 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10010-10010msec 00:22:21.958 16:18:21 -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:21.958 16:18:21 -- target/dif.sh@43 -- # local sub 00:22:21.958 16:18:21 -- target/dif.sh@45 -- # for sub in "$@" 00:22:21.958 16:18:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:21.958 16:18:21 -- target/dif.sh@36 -- # local sub_id=0 00:22:21.958 16:18:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:21.958 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.958 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.958 16:18:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:21.958 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.958 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.958 00:22:21.958 real 0m11.222s 00:22:21.958 user 0m10.242s 00:22:21.958 sys 0m1.350s 00:22:21.958 16:18:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.958 ************************************ 00:22:21.958 END TEST fio_dif_1_default 00:22:21.958 ************************************ 00:22:21.958 16:18:21 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:21.958 16:18:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:21.958 16:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.958 ************************************ 00:22:21.958 START TEST fio_dif_1_multi_subsystems 00:22:21.958 ************************************ 00:22:21.958 16:18:21 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:22:21.958 16:18:21 -- target/dif.sh@92 -- # local files=1 00:22:21.958 16:18:21 -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:21.958 16:18:21 -- target/dif.sh@28 -- # local sub 00:22:21.958 16:18:21 -- target/dif.sh@30 -- # for sub in "$@" 00:22:21.958 16:18:21 -- target/dif.sh@31 -- # create_subsystem 0 00:22:21.958 16:18:21 -- target/dif.sh@18 -- # local sub_id=0 00:22:21.958 16:18:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:21.958 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.958 bdev_null0 00:22:21.958 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.958 16:18:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:21.958 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.958 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.958 16:18:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:21.958 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 
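Each fio_dif case sits on a null bdev carved out with per-block metadata and a protection type. The bdev_null_create arguments used throughout decode as name, size, block size, metadata bytes, and DIF type (size in MiB per SPDK's null bdev RPC, an interpretation rather than something the log states):

    # bdev_null_create <name> <size MiB> <block size> --md-size <bytes> --dif-type <0-3>
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # type 3 is used later in rand_params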
00:22:21.958 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.958 16:18:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:21.958 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.958 [2024-04-24 16:18:21.948983] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.958 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.958 16:18:21 -- target/dif.sh@30 -- # for sub in "$@" 00:22:21.958 16:18:21 -- target/dif.sh@31 -- # create_subsystem 1 00:22:21.958 16:18:21 -- target/dif.sh@18 -- # local sub_id=1 00:22:21.958 16:18:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:21.958 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.958 bdev_null1 00:22:21.958 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.958 16:18:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:21.958 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.958 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.959 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.959 16:18:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:21.959 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.959 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.959 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.959 16:18:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.959 16:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.959 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.959 16:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.959 16:18:21 -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:21.959 16:18:21 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:21.959 16:18:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:21.959 16:18:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:21.959 16:18:21 -- nvmf/common.sh@521 -- # config=() 00:22:21.959 16:18:21 -- nvmf/common.sh@521 -- # local subsystem config 00:22:21.959 16:18:21 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:21.959 16:18:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:21.959 16:18:21 -- target/dif.sh@82 -- # gen_fio_conf 00:22:21.959 16:18:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:21.959 16:18:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:21.959 { 00:22:21.959 "params": { 00:22:21.959 "name": "Nvme$subsystem", 00:22:21.959 "trtype": "$TEST_TRANSPORT", 00:22:21.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.959 "adrfam": "ipv4", 00:22:21.959 "trsvcid": "$NVMF_PORT", 00:22:21.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.959 "hdgst": ${hdgst:-false}, 00:22:21.959 "ddgst": ${ddgst:-false} 00:22:21.959 }, 
00:22:21.959 "method": "bdev_nvme_attach_controller" 00:22:21.959 } 00:22:21.959 EOF 00:22:21.959 )") 00:22:21.959 16:18:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:21.959 16:18:21 -- target/dif.sh@54 -- # local file 00:22:21.959 16:18:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:21.959 16:18:21 -- target/dif.sh@56 -- # cat 00:22:21.959 16:18:21 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:21.959 16:18:21 -- common/autotest_common.sh@1327 -- # shift 00:22:21.959 16:18:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:21.959 16:18:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:21.959 16:18:21 -- nvmf/common.sh@543 -- # cat 00:22:21.959 16:18:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:21.959 16:18:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:21.959 16:18:21 -- target/dif.sh@72 -- # (( file <= files )) 00:22:21.959 16:18:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:21.959 16:18:21 -- target/dif.sh@73 -- # cat 00:22:21.959 16:18:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:21.959 16:18:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:21.959 16:18:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:21.959 { 00:22:21.959 "params": { 00:22:21.959 "name": "Nvme$subsystem", 00:22:21.959 "trtype": "$TEST_TRANSPORT", 00:22:21.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.959 "adrfam": "ipv4", 00:22:21.959 "trsvcid": "$NVMF_PORT", 00:22:21.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.959 "hdgst": ${hdgst:-false}, 00:22:21.959 "ddgst": ${ddgst:-false} 00:22:21.959 }, 00:22:21.959 "method": "bdev_nvme_attach_controller" 00:22:21.959 } 00:22:21.959 EOF 00:22:21.959 )") 00:22:21.959 16:18:21 -- target/dif.sh@72 -- # (( file++ )) 00:22:21.959 16:18:21 -- target/dif.sh@72 -- # (( file <= files )) 00:22:21.959 16:18:21 -- nvmf/common.sh@543 -- # cat 00:22:21.959 16:18:21 -- nvmf/common.sh@545 -- # jq . 
00:22:21.959 16:18:21 -- nvmf/common.sh@546 -- # IFS=, 00:22:21.959 16:18:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:21.959 "params": { 00:22:21.959 "name": "Nvme0", 00:22:21.959 "trtype": "tcp", 00:22:21.959 "traddr": "10.0.0.2", 00:22:21.959 "adrfam": "ipv4", 00:22:21.959 "trsvcid": "4420", 00:22:21.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:21.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:21.959 "hdgst": false, 00:22:21.959 "ddgst": false 00:22:21.959 }, 00:22:21.959 "method": "bdev_nvme_attach_controller" 00:22:21.959 },{ 00:22:21.959 "params": { 00:22:21.959 "name": "Nvme1", 00:22:21.959 "trtype": "tcp", 00:22:21.959 "traddr": "10.0.0.2", 00:22:21.959 "adrfam": "ipv4", 00:22:21.959 "trsvcid": "4420", 00:22:21.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.959 "hdgst": false, 00:22:21.959 "ddgst": false 00:22:21.959 }, 00:22:21.959 "method": "bdev_nvme_attach_controller" 00:22:21.959 }' 00:22:21.959 16:18:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:21.959 16:18:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:21.959 16:18:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:21.959 16:18:22 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:21.959 16:18:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:21.959 16:18:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:21.959 16:18:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:21.959 16:18:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:21.959 16:18:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:21.959 16:18:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:21.959 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:21.959 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:21.959 fio-3.35 00:22:21.959 Starting 2 threads 00:22:21.959 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.920 00:22:31.920 filename0: (groupid=0, jobs=1): err= 0: pid=3490890: Wed Apr 24 16:18:33 2024 00:22:31.920 read: IOPS=96, BW=384KiB/s (394kB/s)(3856KiB/10029msec) 00:22:31.920 slat (usec): min=4, max=105, avg= 9.72, stdev= 4.17 00:22:31.920 clat (usec): min=40871, max=44398, avg=41581.44, stdev=521.64 00:22:31.920 lat (usec): min=40879, max=44421, avg=41591.16, stdev=521.92 00:22:31.920 clat percentiles (usec): 00:22:31.920 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:22:31.920 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:22:31.920 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:22:31.920 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:22:31.920 | 99.99th=[44303] 00:22:31.920 bw ( KiB/s): min= 352, max= 416, per=33.62%, avg=384.00, stdev=14.68, samples=20 00:22:31.920 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:22:31.920 lat (msec) : 50=100.00% 00:22:31.920 cpu : usr=94.19%, sys=5.53%, ctx=19, majf=0, minf=70 00:22:31.920 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:31.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.920 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.920 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:31.920 filename1: (groupid=0, jobs=1): err= 0: pid=3490891: Wed Apr 24 16:18:33 2024 00:22:31.920 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:22:31.920 slat (nsec): min=4498, max=92267, avg=9422.24, stdev=3051.63 00:22:31.920 clat (usec): min=698, max=44325, avg=21026.27, stdev=20113.76 00:22:31.920 lat (usec): min=706, max=44336, avg=21035.69, stdev=20113.64 00:22:31.920 clat percentiles (usec): 00:22:31.920 | 1.00th=[ 807], 5.00th=[ 824], 10.00th=[ 840], 20.00th=[ 857], 00:22:31.920 | 30.00th=[ 889], 40.00th=[ 914], 50.00th=[41157], 60.00th=[41157], 00:22:31.920 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:22:31.920 | 99.00th=[41157], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:22:31.920 | 99.99th=[44303] 00:22:31.920 bw ( KiB/s): min= 704, max= 768, per=66.62%, avg=761.26, stdev=17.13, samples=19 00:22:31.920 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19 00:22:31.920 lat (usec) : 750=0.26%, 1000=49.47% 00:22:31.920 lat (msec) : 2=0.16%, 50=50.11% 00:22:31.920 cpu : usr=94.03%, sys=5.58%, ctx=69, majf=0, minf=207 00:22:31.920 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.920 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.920 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:31.920 00:22:31.920 Run status group 0 (all jobs): 00:22:31.920 READ: bw=1142KiB/s (1170kB/s), 384KiB/s-760KiB/s (394kB/s-778kB/s), io=11.2MiB (11.7MB), run=10002-10029msec 00:22:32.179 16:18:33 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:32.179 16:18:33 -- target/dif.sh@43 -- # local sub 00:22:32.179 16:18:33 -- target/dif.sh@45 -- # for sub in "$@" 00:22:32.179 16:18:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:32.179 16:18:33 -- target/dif.sh@36 -- # local sub_id=0 00:22:32.179 16:18:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:32.179 16:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.179 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:32.179 16:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.179 16:18:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:32.179 16:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.179 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:32.179 16:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.179 16:18:33 -- target/dif.sh@45 -- # for sub in "$@" 00:22:32.179 16:18:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:32.179 16:18:33 -- target/dif.sh@36 -- # local sub_id=1 00:22:32.179 16:18:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.179 16:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.179 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:32.179 16:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.179 16:18:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:32.179 16:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.179 16:18:33 -- 
common/autotest_common.sh@10 -- # set +x 00:22:32.179 16:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.179 00:22:32.179 real 0m11.398s 00:22:32.179 user 0m20.214s 00:22:32.179 sys 0m1.466s 00:22:32.179 16:18:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:32.179 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:32.179 ************************************ 00:22:32.179 END TEST fio_dif_1_multi_subsystems 00:22:32.179 ************************************ 00:22:32.179 16:18:33 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:32.179 16:18:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:32.179 16:18:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:32.179 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:32.179 ************************************ 00:22:32.179 START TEST fio_dif_rand_params 00:22:32.179 ************************************ 00:22:32.179 16:18:33 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:22:32.179 16:18:33 -- target/dif.sh@100 -- # local NULL_DIF 00:22:32.179 16:18:33 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:32.179 16:18:33 -- target/dif.sh@103 -- # NULL_DIF=3 00:22:32.179 16:18:33 -- target/dif.sh@103 -- # bs=128k 00:22:32.179 16:18:33 -- target/dif.sh@103 -- # numjobs=3 00:22:32.179 16:18:33 -- target/dif.sh@103 -- # iodepth=3 00:22:32.179 16:18:33 -- target/dif.sh@103 -- # runtime=5 00:22:32.179 16:18:33 -- target/dif.sh@105 -- # create_subsystems 0 00:22:32.179 16:18:33 -- target/dif.sh@28 -- # local sub 00:22:32.179 16:18:33 -- target/dif.sh@30 -- # for sub in "$@" 00:22:32.179 16:18:33 -- target/dif.sh@31 -- # create_subsystem 0 00:22:32.179 16:18:33 -- target/dif.sh@18 -- # local sub_id=0 00:22:32.179 16:18:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:32.179 16:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.179 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:32.179 bdev_null0 00:22:32.179 16:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.179 16:18:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:32.179 16:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.179 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:32.179 16:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.179 16:18:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:32.179 16:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.179 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:32.437 16:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.437 16:18:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:32.437 16:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.438 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:32.438 [2024-04-24 16:18:33.471158] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.438 16:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.438 16:18:33 -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:32.438 16:18:33 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:32.438 16:18:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:32.438 16:18:33 
-- nvmf/common.sh@521 -- # config=() 00:22:32.438 16:18:33 -- nvmf/common.sh@521 -- # local subsystem config 00:22:32.438 16:18:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:32.438 16:18:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:32.438 { 00:22:32.438 "params": { 00:22:32.438 "name": "Nvme$subsystem", 00:22:32.438 "trtype": "$TEST_TRANSPORT", 00:22:32.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.438 "adrfam": "ipv4", 00:22:32.438 "trsvcid": "$NVMF_PORT", 00:22:32.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.438 "hdgst": ${hdgst:-false}, 00:22:32.438 "ddgst": ${ddgst:-false} 00:22:32.438 }, 00:22:32.438 "method": "bdev_nvme_attach_controller" 00:22:32.438 } 00:22:32.438 EOF 00:22:32.438 )") 00:22:32.438 16:18:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:32.438 16:18:33 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:32.438 16:18:33 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:32.438 16:18:33 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:32.438 16:18:33 -- target/dif.sh@82 -- # gen_fio_conf 00:22:32.438 16:18:33 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:32.438 16:18:33 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:32.438 16:18:33 -- target/dif.sh@54 -- # local file 00:22:32.438 16:18:33 -- common/autotest_common.sh@1327 -- # shift 00:22:32.438 16:18:33 -- target/dif.sh@56 -- # cat 00:22:32.438 16:18:33 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:32.438 16:18:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.438 16:18:33 -- nvmf/common.sh@543 -- # cat 00:22:32.438 16:18:33 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:32.438 16:18:33 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:32.438 16:18:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:32.438 16:18:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:32.438 16:18:33 -- target/dif.sh@72 -- # (( file <= files )) 00:22:32.438 16:18:33 -- nvmf/common.sh@545 -- # jq . 
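[Editor's note] The gen_nvmf_target_json trace above, together with the rendered JSON printed just below, shows the pattern the test uses to hand fio its SPDK bdev configuration: one bdev_nvme_attach_controller stanza is templated per subsystem ID via a heredoc, collected into a bash array, comma-joined with IFS, and validated with jq before being fed to the plugin on /dev/fd/62. The following is a minimal standalone sketch of that pattern, not the verbatim nvmf/common.sh source: the function name is hypothetical, the outer "subsystems" wrapper is illustrative (the real helper embeds the stanzas in a fuller bdev-subsystem document), and tcp/10.0.0.2/4420 stand in for the $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, and $NVMF_PORT variables visible in the heredoc above.

gen_target_json_sketch() {
    # One attach stanza per subsystem ID argument; "${@:-1}" defaults to
    # subsystem 1 when no IDs are given, matching the trace above.
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the stanzas (IFS=,) as the IFS/printf trace below does,
    # and let jq fail loudly if the result is not valid JSON.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}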
00:22:32.438 16:18:33 -- nvmf/common.sh@546 -- # IFS=, 00:22:32.438 16:18:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:32.438 "params": { 00:22:32.438 "name": "Nvme0", 00:22:32.438 "trtype": "tcp", 00:22:32.438 "traddr": "10.0.0.2", 00:22:32.438 "adrfam": "ipv4", 00:22:32.438 "trsvcid": "4420", 00:22:32.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:32.438 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:32.438 "hdgst": false, 00:22:32.438 "ddgst": false 00:22:32.438 }, 00:22:32.438 "method": "bdev_nvme_attach_controller" 00:22:32.438 }' 00:22:32.438 16:18:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:32.438 16:18:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:32.438 16:18:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.438 16:18:33 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:32.438 16:18:33 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:32.438 16:18:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:32.438 16:18:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:32.438 16:18:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:32.438 16:18:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:32.438 16:18:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:32.696 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:32.696 ... 00:22:32.696 fio-3.35 00:22:32.696 Starting 3 threads 00:22:32.696 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.253 00:22:39.253 filename0: (groupid=0, jobs=1): err= 0: pid=3492308: Wed Apr 24 16:18:39 2024 00:22:39.253 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(135MiB/5007msec) 00:22:39.253 slat (nsec): min=5936, max=35118, avg=12592.96, stdev=3368.54 00:22:39.253 clat (usec): min=5468, max=90301, avg=13939.70, stdev=12179.22 00:22:39.253 lat (usec): min=5480, max=90314, avg=13952.29, stdev=12179.17 00:22:39.253 clat percentiles (usec): 00:22:39.253 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 8455], 00:22:39.253 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[11338], 00:22:39.253 | 70.00th=[12518], 80.00th=[13566], 90.00th=[16188], 95.00th=[51119], 00:22:39.253 | 99.00th=[54264], 99.50th=[55313], 99.90th=[88605], 99.95th=[90702], 00:22:39.253 | 99.99th=[90702] 00:22:39.253 bw ( KiB/s): min=23040, max=34304, per=36.13%, avg=27474.10, stdev=3388.91, samples=10 00:22:39.253 iops : min= 180, max= 268, avg=214.60, stdev=26.48, samples=10 00:22:39.253 lat (msec) : 10=50.00%, 20=41.26%, 50=2.88%, 100=5.86% 00:22:39.253 cpu : usr=89.59%, sys=9.97%, ctx=15, majf=0, minf=57 00:22:39.253 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:39.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.253 issued rwts: total=1076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.253 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:39.253 filename0: (groupid=0, jobs=1): err= 0: pid=3492309: Wed Apr 24 16:18:39 2024 00:22:39.253 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(125MiB/5047msec) 00:22:39.253 slat (nsec): min=5110, max=88448, avg=12013.47, stdev=3998.63 00:22:39.253 clat 
(usec): min=5519, max=55986, avg=14996.39, stdev=13291.95 00:22:39.253 lat (usec): min=5531, max=55999, avg=15008.40, stdev=13292.25 00:22:39.253 clat percentiles (usec): 00:22:39.253 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 8291], 00:22:39.253 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11600], 00:22:39.253 | 70.00th=[12780], 80.00th=[13829], 90.00th=[48497], 95.00th=[51643], 00:22:39.253 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:22:39.253 | 99.99th=[55837] 00:22:39.253 bw ( KiB/s): min=17152, max=35328, per=33.67%, avg=25600.00, stdev=6601.07, samples=10 00:22:39.253 iops : min= 134, max= 276, avg=200.00, stdev=51.57, samples=10 00:22:39.253 lat (msec) : 10=44.77%, 20=43.77%, 50=3.79%, 100=7.68% 00:22:39.253 cpu : usr=90.73%, sys=8.86%, ctx=15, majf=0, minf=101 00:22:39.253 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:39.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.253 issued rwts: total=1003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.253 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:39.253 filename0: (groupid=0, jobs=1): err= 0: pid=3492310: Wed Apr 24 16:18:39 2024 00:22:39.253 read: IOPS=183, BW=22.9MiB/s (24.1MB/s)(115MiB/5007msec) 00:22:39.253 slat (nsec): min=5909, max=33915, avg=12240.27, stdev=2965.82 00:22:39.253 clat (usec): min=5372, max=88471, avg=16324.71, stdev=14172.28 00:22:39.253 lat (usec): min=5383, max=88483, avg=16336.95, stdev=14172.14 00:22:39.253 clat percentiles (usec): 00:22:39.253 | 1.00th=[ 6063], 5.00th=[ 6718], 10.00th=[ 7767], 20.00th=[ 8848], 00:22:39.253 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11207], 60.00th=[12387], 00:22:39.253 | 70.00th=[13566], 80.00th=[15008], 90.00th=[50070], 95.00th=[52167], 00:22:39.253 | 99.00th=[54789], 99.50th=[55313], 99.90th=[88605], 99.95th=[88605], 00:22:39.253 | 99.99th=[88605] 00:22:39.253 bw ( KiB/s): min=14848, max=29952, per=30.84%, avg=23449.60, stdev=4902.32, samples=10 00:22:39.253 iops : min= 116, max= 234, avg=183.20, stdev=38.30, samples=10 00:22:39.253 lat (msec) : 10=37.98%, 20=48.75%, 50=3.59%, 100=9.68% 00:22:39.253 cpu : usr=90.49%, sys=9.07%, ctx=11, majf=0, minf=61 00:22:39.253 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:39.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.253 issued rwts: total=919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.253 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:39.253 00:22:39.253 Run status group 0 (all jobs): 00:22:39.253 READ: bw=74.3MiB/s (77.9MB/s), 22.9MiB/s-26.9MiB/s (24.1MB/s-28.2MB/s), io=375MiB (393MB), run=5007-5047msec 00:22:39.253 16:18:39 -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:39.253 16:18:39 -- target/dif.sh@43 -- # local sub 00:22:39.253 16:18:39 -- target/dif.sh@45 -- # for sub in "$@" 00:22:39.253 16:18:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:39.253 16:18:39 -- target/dif.sh@36 -- # local sub_id=0 00:22:39.253 16:18:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:39.253 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.253 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.253 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:22:39.253 16:18:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:39.253 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.253 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.253 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.253 16:18:39 -- target/dif.sh@109 -- # NULL_DIF=2 00:22:39.253 16:18:39 -- target/dif.sh@109 -- # bs=4k 00:22:39.253 16:18:39 -- target/dif.sh@109 -- # numjobs=8 00:22:39.253 16:18:39 -- target/dif.sh@109 -- # iodepth=16 00:22:39.253 16:18:39 -- target/dif.sh@109 -- # runtime= 00:22:39.253 16:18:39 -- target/dif.sh@109 -- # files=2 00:22:39.253 16:18:39 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:39.253 16:18:39 -- target/dif.sh@28 -- # local sub 00:22:39.253 16:18:39 -- target/dif.sh@30 -- # for sub in "$@" 00:22:39.253 16:18:39 -- target/dif.sh@31 -- # create_subsystem 0 00:22:39.253 16:18:39 -- target/dif.sh@18 -- # local sub_id=0 00:22:39.253 16:18:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:39.253 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.253 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.253 bdev_null0 00:22:39.253 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.253 16:18:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:39.253 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.253 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.253 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.253 16:18:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:39.253 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.253 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.253 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.253 16:18:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:39.253 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.253 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.253 [2024-04-24 16:18:39.728307] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.253 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.253 16:18:39 -- target/dif.sh@30 -- # for sub in "$@" 00:22:39.253 16:18:39 -- target/dif.sh@31 -- # create_subsystem 1 00:22:39.253 16:18:39 -- target/dif.sh@18 -- # local sub_id=1 00:22:39.253 16:18:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:39.253 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.253 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.253 bdev_null1 00:22:39.253 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.253 16:18:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:39.253 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.253 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.254 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.254 16:18:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:39.254 16:18:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.254 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.254 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.254 16:18:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.254 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.254 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.254 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.254 16:18:39 -- target/dif.sh@30 -- # for sub in "$@" 00:22:39.254 16:18:39 -- target/dif.sh@31 -- # create_subsystem 2 00:22:39.254 16:18:39 -- target/dif.sh@18 -- # local sub_id=2 00:22:39.254 16:18:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:39.254 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.254 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.254 bdev_null2 00:22:39.254 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.254 16:18:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:39.254 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.254 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.254 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.254 16:18:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:39.254 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.254 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.254 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.254 16:18:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:39.254 16:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.254 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:39.254 16:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.254 16:18:39 -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:39.254 16:18:39 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:39.254 16:18:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:39.254 16:18:39 -- nvmf/common.sh@521 -- # config=() 00:22:39.254 16:18:39 -- nvmf/common.sh@521 -- # local subsystem config 00:22:39.254 16:18:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:39.254 16:18:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:39.254 { 00:22:39.254 "params": { 00:22:39.254 "name": "Nvme$subsystem", 00:22:39.254 "trtype": "$TEST_TRANSPORT", 00:22:39.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.254 "adrfam": "ipv4", 00:22:39.254 "trsvcid": "$NVMF_PORT", 00:22:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.254 "hdgst": ${hdgst:-false}, 00:22:39.254 "ddgst": ${ddgst:-false} 00:22:39.254 }, 00:22:39.254 "method": "bdev_nvme_attach_controller" 00:22:39.254 } 00:22:39.254 EOF 00:22:39.254 )") 00:22:39.254 16:18:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:39.254 16:18:39 -- target/dif.sh@82 -- # gen_fio_conf 00:22:39.254 16:18:39 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
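[Editor's note] The create_subsystems 0 1 2 sequence traced above repeats the same four RPCs per subsystem ID, this time against --dif-type 2 null bdevs. Condensed into a plain loop, using the standard scripts/rpc.py client in place of the test's rpc_cmd wrapper (the RPC names and arguments are verbatim from the trace; the rpc.py invocation path is an assumption):

for sub in 0 1 2; do
    # 64 MB null bdev, 512-byte blocks with 16-byte metadata, DIF type 2
    scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    # NVMe-oF subsystem, namespace, and TCP listener for that bdev
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420
done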
00:22:39.254 16:18:39 -- target/dif.sh@54 -- # local file 00:22:39.254 16:18:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:39.254 16:18:39 -- target/dif.sh@56 -- # cat 00:22:39.254 16:18:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:39.254 16:18:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:39.254 16:18:39 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:39.254 16:18:39 -- common/autotest_common.sh@1327 -- # shift 00:22:39.254 16:18:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:39.254 16:18:39 -- nvmf/common.sh@543 -- # cat 00:22:39.254 16:18:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:39.254 16:18:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:39.254 16:18:39 -- target/dif.sh@72 -- # (( file <= files )) 00:22:39.254 16:18:39 -- target/dif.sh@73 -- # cat 00:22:39.254 16:18:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:39.254 16:18:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:39.254 16:18:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:39.254 16:18:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:39.254 16:18:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:39.254 { 00:22:39.254 "params": { 00:22:39.254 "name": "Nvme$subsystem", 00:22:39.254 "trtype": "$TEST_TRANSPORT", 00:22:39.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.254 "adrfam": "ipv4", 00:22:39.254 "trsvcid": "$NVMF_PORT", 00:22:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.254 "hdgst": ${hdgst:-false}, 00:22:39.254 "ddgst": ${ddgst:-false} 00:22:39.254 }, 00:22:39.254 "method": "bdev_nvme_attach_controller" 00:22:39.254 } 00:22:39.254 EOF 00:22:39.254 )") 00:22:39.254 16:18:39 -- target/dif.sh@72 -- # (( file++ )) 00:22:39.254 16:18:39 -- nvmf/common.sh@543 -- # cat 00:22:39.254 16:18:39 -- target/dif.sh@72 -- # (( file <= files )) 00:22:39.254 16:18:39 -- target/dif.sh@73 -- # cat 00:22:39.254 16:18:39 -- target/dif.sh@72 -- # (( file++ )) 00:22:39.254 16:18:39 -- target/dif.sh@72 -- # (( file <= files )) 00:22:39.254 16:18:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:39.254 16:18:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:39.254 { 00:22:39.254 "params": { 00:22:39.254 "name": "Nvme$subsystem", 00:22:39.254 "trtype": "$TEST_TRANSPORT", 00:22:39.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.254 "adrfam": "ipv4", 00:22:39.254 "trsvcid": "$NVMF_PORT", 00:22:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.254 "hdgst": ${hdgst:-false}, 00:22:39.254 "ddgst": ${ddgst:-false} 00:22:39.254 }, 00:22:39.254 "method": "bdev_nvme_attach_controller" 00:22:39.254 } 00:22:39.254 EOF 00:22:39.254 )") 00:22:39.254 16:18:39 -- nvmf/common.sh@543 -- # cat 00:22:39.254 16:18:39 -- nvmf/common.sh@545 -- # jq . 
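[Editor's note] The ldd probes traced in this block decide what ends up in LD_PRELOAD a few lines below: if the SPDK fio plugin was linked against an ASan runtime (libasan for GCC, libclang_rt.asan for Clang), that runtime must be preloaded ahead of the plugin or fio cannot load the spdk_bdev ioengine. In this run both greps come back empty, so only the plugin itself is preloaded. A standalone sketch of the same detection, under the plugin and fio paths used by this job (the accumulation into a single variable is a simplification of the per-sanitizer asan_lib handling in the trace):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Third ldd column is the resolved library path; empty when the plugin
    # was not linked against that sanitizer runtime (the case in this run).
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$lib" ]] && asan_lib+=" $lib"
done
# Preload any sanitizer runtime first, then the plugin itself, so fio can
# resolve the spdk_bdev ioengine at startup.
LD_PRELOAD="$asan_lib $plugin" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61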
00:22:39.254 16:18:39 -- nvmf/common.sh@546 -- # IFS=, 00:22:39.254 16:18:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:39.254 "params": { 00:22:39.254 "name": "Nvme0", 00:22:39.254 "trtype": "tcp", 00:22:39.254 "traddr": "10.0.0.2", 00:22:39.254 "adrfam": "ipv4", 00:22:39.254 "trsvcid": "4420", 00:22:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.254 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:39.254 "hdgst": false, 00:22:39.254 "ddgst": false 00:22:39.254 }, 00:22:39.254 "method": "bdev_nvme_attach_controller" 00:22:39.254 },{ 00:22:39.254 "params": { 00:22:39.254 "name": "Nvme1", 00:22:39.254 "trtype": "tcp", 00:22:39.254 "traddr": "10.0.0.2", 00:22:39.254 "adrfam": "ipv4", 00:22:39.254 "trsvcid": "4420", 00:22:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.254 "hdgst": false, 00:22:39.254 "ddgst": false 00:22:39.254 }, 00:22:39.254 "method": "bdev_nvme_attach_controller" 00:22:39.254 },{ 00:22:39.254 "params": { 00:22:39.254 "name": "Nvme2", 00:22:39.254 "trtype": "tcp", 00:22:39.254 "traddr": "10.0.0.2", 00:22:39.254 "adrfam": "ipv4", 00:22:39.254 "trsvcid": "4420", 00:22:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:39.254 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:39.254 "hdgst": false, 00:22:39.254 "ddgst": false 00:22:39.254 }, 00:22:39.254 "method": "bdev_nvme_attach_controller" 00:22:39.254 }' 00:22:39.254 16:18:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:39.254 16:18:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:39.254 16:18:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:39.254 16:18:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:39.254 16:18:39 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:39.254 16:18:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:39.254 16:18:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:39.254 16:18:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:39.254 16:18:39 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:39.254 16:18:39 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:39.254 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:39.254 ... 00:22:39.254 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:39.254 ... 00:22:39.254 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:39.254 ... 
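[Editor's note] fio's "Starting 24 threads" on the next line follows directly from the parameters set at the top of this phase: files=2 yields three job sections (filename0 plus two extra files), and numjobs=8 clones each, so 3 x 8 = 24 threads at iodepth=16 apiece. Below is a hedged reconstruction of the job file gen_fio_conf writes to /dev/fd/61: the function name is hypothetical, the [filenameN] section names and rw/bs/iodepth values come from the banner lines above, while the Nvme${N}n1 filenames and thread=1 are assumptions based on SPDK's usual controller/namespace naming and fio's threaded mode; the verbatim target/dif.sh output may differ.

gen_fio_conf_sketch() {
    cat <<EOF
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
numjobs=8
iodepth=16

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF
}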
00:22:39.254 fio-3.35 00:22:39.254 Starting 24 threads 00:22:39.254 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.454 00:22:51.455 filename0: (groupid=0, jobs=1): err= 0: pid=3493177: Wed Apr 24 16:18:51 2024 00:22:51.455 read: IOPS=462, BW=1850KiB/s (1895kB/s)(18.1MiB/10005msec) 00:22:51.455 slat (nsec): min=8046, max=88596, avg=26249.13, stdev=9497.00 00:22:51.455 clat (usec): min=5164, max=58980, avg=34373.91, stdev=3635.83 00:22:51.455 lat (usec): min=5173, max=59015, avg=34400.16, stdev=3636.28 00:22:51.455 clat percentiles (usec): 00:22:51.455 | 1.00th=[25822], 5.00th=[33162], 10.00th=[33817], 20.00th=[33817], 00:22:51.455 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:22:51.455 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:22:51.455 | 99.00th=[52167], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:22:51.455 | 99.99th=[58983] 00:22:51.455 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1842.53, stdev=69.59, samples=19 00:22:51.455 iops : min= 416, max= 480, avg=460.63, stdev=17.40, samples=19 00:22:51.455 lat (msec) : 10=0.04%, 20=0.71%, 50=97.64%, 100=1.60% 00:22:51.455 cpu : usr=98.13%, sys=1.47%, ctx=14, majf=0, minf=9 00:22:51.455 IO depths : 1=3.0%, 2=8.3%, 4=21.6%, 8=57.1%, 16=10.0%, 32=0.0%, >=64=0.0% 00:22:51.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 complete : 0=0.0%, 4=93.4%, 8=1.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 issued rwts: total=4628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.455 filename0: (groupid=0, jobs=1): err= 0: pid=3493178: Wed Apr 24 16:18:51 2024 00:22:51.455 read: IOPS=464, BW=1859KiB/s (1903kB/s)(18.2MiB/10019msec) 00:22:51.455 slat (usec): min=4, max=121, avg=31.43, stdev= 9.14 00:22:51.455 clat (usec): min=18831, max=42731, avg=34157.81, stdev=1118.05 00:22:51.455 lat (usec): min=18836, max=42750, avg=34189.24, stdev=1119.74 00:22:51.455 clat percentiles (usec): 00:22:51.455 | 1.00th=[33162], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:22:51.455 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.455 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.455 | 99.00th=[35914], 99.50th=[36439], 99.90th=[42730], 99.95th=[42730], 00:22:51.455 | 99.99th=[42730] 00:22:51.455 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1856.00, stdev=65.66, samples=20 00:22:51.455 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:22:51.455 lat (msec) : 20=0.34%, 50=99.66% 00:22:51.455 cpu : usr=93.49%, sys=3.61%, ctx=272, majf=0, minf=9 00:22:51.455 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.455 filename0: (groupid=0, jobs=1): err= 0: pid=3493179: Wed Apr 24 16:18:51 2024 00:22:51.455 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10013msec) 00:22:51.455 slat (nsec): min=13694, max=78878, avg=32533.33, stdev=7427.69 00:22:51.455 clat (usec): min=22330, max=69921, avg=34243.99, stdev=2220.21 00:22:51.455 lat (usec): min=22356, max=69984, avg=34276.52, stdev=2221.51 00:22:51.455 clat percentiles (usec): 00:22:51.455 | 1.00th=[33162], 
5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:22:51.455 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.455 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.455 | 99.00th=[35390], 99.50th=[35914], 99.90th=[69731], 99.95th=[69731], 00:22:51.455 | 99.99th=[69731] 00:22:51.455 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1845.89, stdev=77.69, samples=19 00:22:51.455 iops : min= 416, max= 480, avg=461.47, stdev=19.42, samples=19 00:22:51.455 lat (msec) : 50=99.66%, 100=0.34% 00:22:51.455 cpu : usr=98.27%, sys=1.35%, ctx=12, majf=0, minf=9 00:22:51.455 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.455 filename0: (groupid=0, jobs=1): err= 0: pid=3493180: Wed Apr 24 16:18:51 2024 00:22:51.455 read: IOPS=464, BW=1858KiB/s (1903kB/s)(18.2MiB/10022msec) 00:22:51.455 slat (usec): min=11, max=110, avg=37.85, stdev=16.03 00:22:51.455 clat (usec): min=22152, max=43971, avg=34045.54, stdev=987.14 00:22:51.455 lat (usec): min=22186, max=44010, avg=34083.39, stdev=988.66 00:22:51.455 clat percentiles (usec): 00:22:51.455 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:51.455 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:51.455 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.455 | 99.00th=[35390], 99.50th=[35914], 99.90th=[43779], 99.95th=[43779], 00:22:51.455 | 99.99th=[43779] 00:22:51.455 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1856.00, stdev=65.66, samples=20 00:22:51.455 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:22:51.455 lat (msec) : 50=100.00% 00:22:51.455 cpu : usr=98.11%, sys=1.45%, ctx=14, majf=0, minf=9 00:22:51.455 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.455 filename0: (groupid=0, jobs=1): err= 0: pid=3493181: Wed Apr 24 16:18:51 2024 00:22:51.455 read: IOPS=465, BW=1863KiB/s (1908kB/s)(18.2MiB/10029msec) 00:22:51.455 slat (usec): min=8, max=129, avg=41.04, stdev=29.43 00:22:51.455 clat (usec): min=18641, max=42225, avg=33984.45, stdev=1427.91 00:22:51.455 lat (usec): min=18653, max=42280, avg=34025.50, stdev=1424.19 00:22:51.455 clat percentiles (usec): 00:22:51.455 | 1.00th=[28967], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:22:51.455 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.455 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.455 | 99.00th=[35914], 99.50th=[36439], 99.90th=[41681], 99.95th=[41681], 00:22:51.455 | 99.99th=[42206] 00:22:51.455 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1862.55, stdev=65.17, samples=20 00:22:51.455 iops : min= 448, max= 480, avg=465.60, stdev=16.33, samples=20 00:22:51.455 lat (msec) : 20=0.34%, 50=99.66% 00:22:51.455 cpu : usr=97.68%, sys=1.87%, ctx=20, majf=0, minf=9 00:22:51.455 IO 
depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:51.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.455 filename0: (groupid=0, jobs=1): err= 0: pid=3493182: Wed Apr 24 16:18:51 2024 00:22:51.455 read: IOPS=466, BW=1865KiB/s (1909kB/s)(18.2MiB/10023msec) 00:22:51.455 slat (usec): min=3, max=108, avg=20.07, stdev=10.16 00:22:51.455 clat (usec): min=11551, max=52587, avg=34118.14, stdev=2811.98 00:22:51.455 lat (usec): min=11560, max=52601, avg=34138.21, stdev=2812.01 00:22:51.455 clat percentiles (usec): 00:22:51.455 | 1.00th=[19268], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:22:51.455 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:22:51.455 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.455 | 99.00th=[42206], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:22:51.455 | 99.99th=[52691] 00:22:51.455 bw ( KiB/s): min= 1792, max= 1936, per=4.18%, avg=1862.40, stdev=59.51, samples=20 00:22:51.455 iops : min= 448, max= 484, avg=465.60, stdev=14.88, samples=20 00:22:51.455 lat (msec) : 20=1.33%, 50=98.24%, 100=0.43% 00:22:51.455 cpu : usr=93.80%, sys=3.54%, ctx=335, majf=0, minf=9 00:22:51.455 IO depths : 1=3.7%, 2=10.0%, 4=24.9%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:22:51.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.455 filename0: (groupid=0, jobs=1): err= 0: pid=3493183: Wed Apr 24 16:18:51 2024 00:22:51.455 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10002msec) 00:22:51.455 slat (usec): min=10, max=106, avg=39.59, stdev=16.20 00:22:51.455 clat (usec): min=17583, max=49335, avg=34137.45, stdev=926.57 00:22:51.455 lat (usec): min=17622, max=49413, avg=34177.04, stdev=921.89 00:22:51.455 clat percentiles (usec): 00:22:51.455 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:22:51.455 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.455 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.455 | 99.00th=[35914], 99.50th=[36439], 99.90th=[44303], 99.95th=[44303], 00:22:51.455 | 99.99th=[49546] 00:22:51.455 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1852.63, stdev=65.66, samples=19 00:22:51.455 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:22:51.455 lat (msec) : 20=0.04%, 50=99.96% 00:22:51.455 cpu : usr=92.56%, sys=3.84%, ctx=405, majf=0, minf=9 00:22:51.455 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:51.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.455 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.455 filename0: (groupid=0, jobs=1): err= 0: pid=3493184: Wed Apr 24 16:18:51 2024 00:22:51.455 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10005msec) 00:22:51.455 slat (usec): min=7, max=129, avg=37.03, 
stdev=15.20 00:22:51.455 clat (usec): min=4666, max=58178, avg=34039.32, stdev=2315.85 00:22:51.455 lat (usec): min=4674, max=58218, avg=34076.35, stdev=2316.14 00:22:51.455 clat percentiles (usec): 00:22:51.455 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:22:51.455 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.455 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.456 | 99.00th=[35390], 99.50th=[35914], 99.90th=[56361], 99.95th=[56361], 00:22:51.456 | 99.99th=[57934] 00:22:51.456 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1852.63, stdev=78.31, samples=19 00:22:51.456 iops : min= 416, max= 480, avg=463.16, stdev=19.58, samples=19 00:22:51.456 lat (msec) : 10=0.34%, 50=99.31%, 100=0.34% 00:22:51.456 cpu : usr=92.93%, sys=3.69%, ctx=180, majf=0, minf=9 00:22:51.456 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:51.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.456 filename1: (groupid=0, jobs=1): err= 0: pid=3493185: Wed Apr 24 16:18:51 2024 00:22:51.456 read: IOPS=463, BW=1855KiB/s (1899kB/s)(18.1MiB/10006msec) 00:22:51.456 slat (nsec): min=5819, max=96267, avg=33901.70, stdev=9573.17 00:22:51.456 clat (usec): min=20590, max=60986, avg=34194.92, stdev=1824.02 00:22:51.456 lat (usec): min=20615, max=61002, avg=34228.82, stdev=1823.18 00:22:51.456 clat percentiles (usec): 00:22:51.456 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:22:51.456 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.456 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.456 | 99.00th=[35914], 99.50th=[36439], 99.90th=[61080], 99.95th=[61080], 00:22:51.456 | 99.99th=[61080] 00:22:51.456 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1845.89, stdev=77.69, samples=19 00:22:51.456 iops : min= 416, max= 480, avg=461.47, stdev=19.42, samples=19 00:22:51.456 lat (msec) : 50=99.66%, 100=0.34% 00:22:51.456 cpu : usr=98.27%, sys=1.33%, ctx=25, majf=0, minf=9 00:22:51.456 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.456 filename1: (groupid=0, jobs=1): err= 0: pid=3493187: Wed Apr 24 16:18:51 2024 00:22:51.456 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10010msec) 00:22:51.456 slat (nsec): min=3730, max=61490, avg=13496.85, stdev=7739.22 00:22:51.456 clat (usec): min=13523, max=49797, avg=34156.66, stdev=1968.96 00:22:51.456 lat (usec): min=13537, max=49807, avg=34170.16, stdev=1969.31 00:22:51.456 clat percentiles (usec): 00:22:51.456 | 1.00th=[21627], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:22:51.456 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:22:51.456 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.456 | 99.00th=[35914], 99.50th=[36439], 99.90th=[41681], 99.95th=[49546], 00:22:51.456 | 99.99th=[49546] 00:22:51.456 bw ( KiB/s): min= 
1792, max= 1920, per=4.18%, avg=1862.40, stdev=65.33, samples=20 00:22:51.456 iops : min= 448, max= 480, avg=465.60, stdev=16.33, samples=20 00:22:51.456 lat (msec) : 20=0.68%, 50=99.32% 00:22:51.456 cpu : usr=96.96%, sys=2.02%, ctx=127, majf=0, minf=9 00:22:51.456 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:22:51.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.456 filename1: (groupid=0, jobs=1): err= 0: pid=3493188: Wed Apr 24 16:18:51 2024 00:22:51.456 read: IOPS=464, BW=1860KiB/s (1904kB/s)(18.2MiB/10014msec) 00:22:51.456 slat (nsec): min=7332, max=71961, avg=28989.58, stdev=9512.88 00:22:51.456 clat (usec): min=17809, max=43060, avg=34175.85, stdev=1290.94 00:22:51.456 lat (usec): min=17818, max=43089, avg=34204.84, stdev=1291.83 00:22:51.456 clat percentiles (usec): 00:22:51.456 | 1.00th=[32113], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:22:51.456 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:22:51.456 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.456 | 99.00th=[35914], 99.50th=[40109], 99.90th=[42206], 99.95th=[42206], 00:22:51.456 | 99.99th=[43254] 00:22:51.456 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1856.15, stdev=65.51, samples=20 00:22:51.456 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:22:51.456 lat (msec) : 20=0.34%, 50=99.66% 00:22:51.456 cpu : usr=97.06%, sys=2.17%, ctx=124, majf=0, minf=9 00:22:51.456 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:22:51.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.456 filename1: (groupid=0, jobs=1): err= 0: pid=3493189: Wed Apr 24 16:18:51 2024 00:22:51.456 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10013msec) 00:22:51.456 slat (nsec): min=13867, max=82438, avg=32839.49, stdev=7442.57 00:22:51.456 clat (usec): min=22251, max=69080, avg=34236.30, stdev=2171.85 00:22:51.456 lat (usec): min=22283, max=69160, avg=34269.14, stdev=2173.51 00:22:51.456 clat percentiles (usec): 00:22:51.456 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:22:51.456 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.456 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.456 | 99.00th=[35390], 99.50th=[35914], 99.90th=[68682], 99.95th=[68682], 00:22:51.456 | 99.99th=[68682] 00:22:51.456 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1845.89, stdev=77.69, samples=19 00:22:51.456 iops : min= 416, max= 480, avg=461.47, stdev=19.42, samples=19 00:22:51.456 lat (msec) : 50=99.66%, 100=0.34% 00:22:51.456 cpu : usr=98.01%, sys=1.52%, ctx=52, majf=0, minf=9 00:22:51.456 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.456 
latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.456 filename1: (groupid=0, jobs=1): err= 0: pid=3493190: Wed Apr 24 16:18:51 2024 00:22:51.456 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10002msec) 00:22:51.456 slat (usec): min=8, max=108, avg=35.70, stdev=14.14 00:22:51.456 clat (usec): min=31337, max=45522, avg=34179.78, stdev=789.92 00:22:51.456 lat (usec): min=31375, max=45543, avg=34215.48, stdev=786.20 00:22:51.456 clat percentiles (usec): 00:22:51.456 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:22:51.456 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.456 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.456 | 99.00th=[35914], 99.50th=[36439], 99.90th=[44303], 99.95th=[44303], 00:22:51.456 | 99.99th=[45351] 00:22:51.456 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1852.63, stdev=65.66, samples=19 00:22:51.456 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:22:51.456 lat (msec) : 50=100.00% 00:22:51.456 cpu : usr=94.50%, sys=3.35%, ctx=177, majf=0, minf=9 00:22:51.456 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:51.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.456 filename1: (groupid=0, jobs=1): err= 0: pid=3493191: Wed Apr 24 16:18:51 2024 00:22:51.456 read: IOPS=464, BW=1859KiB/s (1904kB/s)(18.2MiB/10018msec) 00:22:51.456 slat (usec): min=7, max=103, avg=23.20, stdev=12.02 00:22:51.456 clat (usec): min=17172, max=51463, avg=34229.53, stdev=1374.11 00:22:51.456 lat (usec): min=17245, max=51474, avg=34252.73, stdev=1373.54 00:22:51.456 clat percentiles (usec): 00:22:51.456 | 1.00th=[33162], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:22:51.456 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:22:51.456 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.456 | 99.00th=[36439], 99.50th=[36439], 99.90th=[42730], 99.95th=[50594], 00:22:51.456 | 99.99th=[51643] 00:22:51.456 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1856.15, stdev=65.51, samples=20 00:22:51.456 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:22:51.456 lat (msec) : 20=0.43%, 50=99.48%, 100=0.09% 00:22:51.456 cpu : usr=95.79%, sys=2.57%, ctx=175, majf=0, minf=9 00:22:51.456 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:51.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.456 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.456 filename1: (groupid=0, jobs=1): err= 0: pid=3493192: Wed Apr 24 16:18:51 2024 00:22:51.456 read: IOPS=463, BW=1855KiB/s (1900kB/s)(18.1MiB/10003msec) 00:22:51.456 slat (nsec): min=8201, max=67733, avg=32765.61, stdev=9813.29 00:22:51.456 clat (usec): min=20557, max=59132, avg=34188.73, stdev=1719.40 00:22:51.457 lat (usec): min=20579, max=59171, avg=34221.50, stdev=1719.79 00:22:51.457 clat percentiles (usec): 00:22:51.457 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:22:51.457 | 30.00th=[33817], 40.00th=[33817], 
50.00th=[34341], 60.00th=[34341], 00:22:51.457 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.457 | 99.00th=[35914], 99.50th=[36439], 99.90th=[58983], 99.95th=[58983], 00:22:51.457 | 99.99th=[58983] 00:22:51.457 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1852.79, stdev=77.91, samples=19 00:22:51.457 iops : min= 416, max= 480, avg=463.16, stdev=19.58, samples=19 00:22:51.457 lat (msec) : 50=99.66%, 100=0.34% 00:22:51.457 cpu : usr=98.31%, sys=1.29%, ctx=11, majf=0, minf=9 00:22:51.457 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.457 filename1: (groupid=0, jobs=1): err= 0: pid=3493193: Wed Apr 24 16:18:51 2024 00:22:51.457 read: IOPS=463, BW=1855KiB/s (1900kB/s)(18.1MiB/10004msec) 00:22:51.457 slat (usec): min=13, max=100, avg=38.93, stdev=13.75 00:22:51.457 clat (usec): min=22250, max=60111, avg=34144.24, stdev=1719.91 00:22:51.457 lat (usec): min=22283, max=60154, avg=34183.17, stdev=1718.74 00:22:51.457 clat percentiles (usec): 00:22:51.457 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:51.457 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.457 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.457 | 99.00th=[35390], 99.50th=[35914], 99.90th=[60031], 99.95th=[60031], 00:22:51.457 | 99.99th=[60031] 00:22:51.457 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1852.63, stdev=78.31, samples=19 00:22:51.457 iops : min= 416, max= 480, avg=463.16, stdev=19.58, samples=19 00:22:51.457 lat (msec) : 50=99.66%, 100=0.34% 00:22:51.457 cpu : usr=97.89%, sys=1.70%, ctx=18, majf=0, minf=9 00:22:51.457 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.457 filename2: (groupid=0, jobs=1): err= 0: pid=3493194: Wed Apr 24 16:18:51 2024 00:22:51.457 read: IOPS=464, BW=1857KiB/s (1902kB/s)(18.1MiB/10006msec) 00:22:51.457 slat (nsec): min=8081, max=79295, avg=32705.51, stdev=8906.82 00:22:51.457 clat (usec): min=18213, max=60063, avg=34178.84, stdev=1993.82 00:22:51.457 lat (usec): min=18223, max=60101, avg=34211.55, stdev=1994.91 00:22:51.457 clat percentiles (usec): 00:22:51.457 | 1.00th=[31065], 5.00th=[32900], 10.00th=[33817], 20.00th=[33817], 00:22:51.457 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.457 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:22:51.457 | 99.00th=[36439], 99.50th=[43779], 99.90th=[57934], 99.95th=[58459], 00:22:51.457 | 99.99th=[60031] 00:22:51.457 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1855.16, stdev=77.01, samples=19 00:22:51.457 iops : min= 416, max= 480, avg=463.79, stdev=19.25, samples=19 00:22:51.457 lat (msec) : 20=0.13%, 50=99.53%, 100=0.34% 00:22:51.457 cpu : usr=94.28%, sys=3.23%, ctx=148, majf=0, minf=9 00:22:51.457 IO depths : 1=3.9%, 2=9.5%, 4=22.8%, 8=55.2%, 16=8.7%, 32=0.0%, 
>=64=0.0% 00:22:51.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 issued rwts: total=4646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.457 filename2: (groupid=0, jobs=1): err= 0: pid=3493195: Wed Apr 24 16:18:51 2024 00:22:51.457 read: IOPS=466, BW=1865KiB/s (1910kB/s)(18.2MiB/10021msec) 00:22:51.457 slat (usec): min=5, max=108, avg=39.82, stdev=15.33 00:22:51.457 clat (usec): min=4680, max=41645, avg=33917.95, stdev=1876.40 00:22:51.457 lat (usec): min=4691, max=41677, avg=33957.76, stdev=1879.70 00:22:51.457 clat percentiles (usec): 00:22:51.457 | 1.00th=[32637], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:51.457 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:51.457 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:22:51.457 | 99.00th=[35390], 99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 00:22:51.457 | 99.99th=[41681] 00:22:51.457 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1862.40, stdev=65.33, samples=20 00:22:51.457 iops : min= 448, max= 480, avg=465.60, stdev=16.33, samples=20 00:22:51.457 lat (msec) : 10=0.04%, 20=0.60%, 50=99.36% 00:22:51.457 cpu : usr=98.23%, sys=1.33%, ctx=10, majf=0, minf=9 00:22:51.457 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:51.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.457 filename2: (groupid=0, jobs=1): err= 0: pid=3493196: Wed Apr 24 16:18:51 2024 00:22:51.457 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10002msec) 00:22:51.457 slat (usec): min=10, max=118, avg=48.90, stdev=21.42 00:22:51.457 clat (usec): min=16351, max=51823, avg=34068.19, stdev=2798.41 00:22:51.457 lat (usec): min=16404, max=51889, avg=34117.09, stdev=2798.78 00:22:51.457 clat percentiles (usec): 00:22:51.457 | 1.00th=[17957], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:22:51.457 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:22:51.457 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:22:51.457 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[50594], 00:22:51.457 | 99.99th=[51643] 00:22:51.457 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1852.63, stdev=64.13, samples=19 00:22:51.457 iops : min= 448, max= 480, avg=463.16, stdev=16.03, samples=19 00:22:51.457 lat (msec) : 20=1.42%, 50=98.43%, 100=0.15% 00:22:51.457 cpu : usr=98.41%, sys=1.17%, ctx=25, majf=0, minf=9 00:22:51.457 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:22:51.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.457 filename2: (groupid=0, jobs=1): err= 0: pid=3493197: Wed Apr 24 16:18:51 2024 00:22:51.457 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10011msec) 00:22:51.457 slat (usec): min=10, max=106, avg=38.81, stdev=15.73 00:22:51.457 clat (usec): min=22311, max=66189, 
avg=34172.10, stdev=2051.17 00:22:51.457 lat (usec): min=22342, max=66228, avg=34210.91, stdev=2049.02 00:22:51.457 clat percentiles (usec): 00:22:51.457 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:22:51.457 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.457 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.457 | 99.00th=[35390], 99.50th=[35914], 99.90th=[66323], 99.95th=[66323], 00:22:51.457 | 99.99th=[66323] 00:22:51.457 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1852.63, stdev=78.31, samples=19 00:22:51.457 iops : min= 416, max= 480, avg=463.16, stdev=19.58, samples=19 00:22:51.457 lat (msec) : 50=99.66%, 100=0.34% 00:22:51.457 cpu : usr=97.90%, sys=1.70%, ctx=19, majf=0, minf=9 00:22:51.457 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:51.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.457 filename2: (groupid=0, jobs=1): err= 0: pid=3493198: Wed Apr 24 16:18:51 2024 00:22:51.457 read: IOPS=464, BW=1859KiB/s (1904kB/s)(18.2MiB/10018msec) 00:22:51.457 slat (usec): min=6, max=119, avg=38.10, stdev=16.55 00:22:51.457 clat (usec): min=17808, max=43019, avg=34106.39, stdev=1198.92 00:22:51.457 lat (usec): min=17816, max=43048, avg=34144.49, stdev=1197.52 00:22:51.457 clat percentiles (usec): 00:22:51.457 | 1.00th=[32637], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:22:51.457 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.457 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.457 | 99.00th=[35914], 99.50th=[36439], 99.90th=[42730], 99.95th=[42730], 00:22:51.457 | 99.99th=[43254] 00:22:51.457 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1856.15, stdev=65.51, samples=20 00:22:51.457 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:22:51.457 lat (msec) : 20=0.34%, 50=99.66% 00:22:51.457 cpu : usr=98.22%, sys=1.38%, ctx=13, majf=0, minf=9 00:22:51.457 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.457 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.457 filename2: (groupid=0, jobs=1): err= 0: pid=3493199: Wed Apr 24 16:18:51 2024 00:22:51.457 read: IOPS=463, BW=1855KiB/s (1899kB/s)(18.1MiB/10006msec) 00:22:51.457 slat (nsec): min=11408, max=92396, avg=34917.96, stdev=9617.71 00:22:51.457 clat (usec): min=22266, max=62161, avg=34180.28, stdev=1817.81 00:22:51.458 lat (usec): min=22298, max=62199, avg=34215.20, stdev=1817.70 00:22:51.458 clat percentiles (usec): 00:22:51.458 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:22:51.458 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.458 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.458 | 99.00th=[35390], 99.50th=[35914], 99.90th=[62129], 99.95th=[62129], 00:22:51.458 | 99.99th=[62129] 00:22:51.458 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1852.63, stdev=78.31, samples=19 00:22:51.458 
iops : min= 416, max= 480, avg=463.16, stdev=19.58, samples=19 00:22:51.458 lat (msec) : 50=99.66%, 100=0.34% 00:22:51.458 cpu : usr=98.41%, sys=1.20%, ctx=12, majf=0, minf=9 00:22:51.458 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.458 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.458 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.458 filename2: (groupid=0, jobs=1): err= 0: pid=3493200: Wed Apr 24 16:18:51 2024 00:22:51.458 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10002msec) 00:22:51.458 slat (nsec): min=8293, max=94689, avg=32921.44, stdev=10397.68 00:22:51.458 clat (usec): min=20569, max=59228, avg=34177.13, stdev=1725.92 00:22:51.458 lat (usec): min=20577, max=59267, avg=34210.05, stdev=1726.36 00:22:51.458 clat percentiles (usec): 00:22:51.458 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:22:51.458 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:22:51.458 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.458 | 99.00th=[35914], 99.50th=[36439], 99.90th=[58983], 99.95th=[58983], 00:22:51.458 | 99.99th=[58983] 00:22:51.458 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1852.63, stdev=78.31, samples=19 00:22:51.458 iops : min= 416, max= 480, avg=463.16, stdev=19.58, samples=19 00:22:51.458 lat (msec) : 50=99.66%, 100=0.34% 00:22:51.458 cpu : usr=94.53%, sys=3.17%, ctx=269, majf=0, minf=9 00:22:51.458 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:51.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.458 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.458 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.458 filename2: (groupid=0, jobs=1): err= 0: pid=3493201: Wed Apr 24 16:18:51 2024 00:22:51.458 read: IOPS=464, BW=1859KiB/s (1903kB/s)(18.2MiB/10020msec) 00:22:51.458 slat (nsec): min=7206, max=65259, avg=30193.99, stdev=9757.94 00:22:51.458 clat (usec): min=22524, max=43714, avg=34186.63, stdev=990.92 00:22:51.458 lat (usec): min=22559, max=43728, avg=34216.83, stdev=989.97 00:22:51.458 clat percentiles (usec): 00:22:51.458 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:22:51.458 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:22:51.458 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:22:51.458 | 99.00th=[35390], 99.50th=[35914], 99.90th=[43779], 99.95th=[43779], 00:22:51.458 | 99.99th=[43779] 00:22:51.458 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1856.00, stdev=65.66, samples=20 00:22:51.458 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:22:51.458 lat (msec) : 50=100.00% 00:22:51.458 cpu : usr=98.21%, sys=1.40%, ctx=12, majf=0, minf=9 00:22:51.458 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:51.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.458 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.458 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:51.458 00:22:51.458 Run 
status group 0 (all jobs): 00:22:51.458 READ: bw=43.5MiB/s (45.6MB/s), 1850KiB/s-1867KiB/s (1895kB/s-1912kB/s), io=436MiB (457MB), run=10002-10029msec 00:22:51.458 16:18:51 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:51.458 16:18:51 -- target/dif.sh@43 -- # local sub 00:22:51.458 16:18:51 -- target/dif.sh@45 -- # for sub in "$@" 00:22:51.458 16:18:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:51.458 16:18:51 -- target/dif.sh@36 -- # local sub_id=0 00:22:51.458 16:18:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@45 -- # for sub in "$@" 00:22:51.458 16:18:51 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:51.458 16:18:51 -- target/dif.sh@36 -- # local sub_id=1 00:22:51.458 16:18:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@45 -- # for sub in "$@" 00:22:51.458 16:18:51 -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:51.458 16:18:51 -- target/dif.sh@36 -- # local sub_id=2 00:22:51.458 16:18:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@115 -- # NULL_DIF=1 00:22:51.458 16:18:51 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:51.458 16:18:51 -- target/dif.sh@115 -- # numjobs=2 00:22:51.458 16:18:51 -- target/dif.sh@115 -- # iodepth=8 00:22:51.458 16:18:51 -- target/dif.sh@115 -- # runtime=5 00:22:51.458 16:18:51 -- target/dif.sh@115 -- # files=1 00:22:51.458 16:18:51 -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:51.458 16:18:51 -- target/dif.sh@28 -- # local sub 00:22:51.458 16:18:51 -- target/dif.sh@30 -- # for sub in "$@" 00:22:51.458 16:18:51 -- target/dif.sh@31 -- # create_subsystem 0 00:22:51.458 16:18:51 -- target/dif.sh@18 -- # local sub_id=0 00:22:51.458 16:18:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 bdev_null0 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 [2024-04-24 16:18:51.574939] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@30 -- # for sub in "$@" 00:22:51.458 16:18:51 -- target/dif.sh@31 -- # create_subsystem 1 00:22:51.458 16:18:51 -- target/dif.sh@18 -- # local sub_id=1 00:22:51.458 16:18:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.458 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.458 bdev_null1 00:22:51.458 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.458 16:18:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:51.458 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.459 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.459 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.459 16:18:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:51.459 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.459 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.459 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.459 16:18:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.459 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.459 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.459 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.459 16:18:51 -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:51.459 16:18:51 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:51.459 16:18:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:51.459 16:18:51 -- nvmf/common.sh@521 -- # config=() 00:22:51.459 16:18:51 -- nvmf/common.sh@521 -- # local subsystem config 00:22:51.459 16:18:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:51.459 16:18:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:51.459 { 00:22:51.459 "params": { 00:22:51.459 "name": "Nvme$subsystem", 00:22:51.459 "trtype": "$TEST_TRANSPORT", 
00:22:51.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.459 "adrfam": "ipv4", 00:22:51.459 "trsvcid": "$NVMF_PORT", 00:22:51.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.459 "hdgst": ${hdgst:-false}, 00:22:51.459 "ddgst": ${ddgst:-false} 00:22:51.459 }, 00:22:51.459 "method": "bdev_nvme_attach_controller" 00:22:51.459 } 00:22:51.459 EOF 00:22:51.459 )") 00:22:51.459 16:18:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:51.459 16:18:51 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:51.459 16:18:51 -- target/dif.sh@82 -- # gen_fio_conf 00:22:51.459 16:18:51 -- target/dif.sh@54 -- # local file 00:22:51.459 16:18:51 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:51.459 16:18:51 -- target/dif.sh@56 -- # cat 00:22:51.459 16:18:51 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:51.459 16:18:51 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:51.459 16:18:51 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:51.459 16:18:51 -- common/autotest_common.sh@1327 -- # shift 00:22:51.459 16:18:51 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:51.459 16:18:51 -- nvmf/common.sh@543 -- # cat 00:22:51.459 16:18:51 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.459 16:18:51 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:51.459 16:18:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:51.459 16:18:51 -- target/dif.sh@72 -- # (( file <= files )) 00:22:51.459 16:18:51 -- target/dif.sh@73 -- # cat 00:22:51.459 16:18:51 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:51.459 16:18:51 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:51.459 16:18:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:51.459 16:18:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:51.459 { 00:22:51.459 "params": { 00:22:51.459 "name": "Nvme$subsystem", 00:22:51.459 "trtype": "$TEST_TRANSPORT", 00:22:51.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.459 "adrfam": "ipv4", 00:22:51.459 "trsvcid": "$NVMF_PORT", 00:22:51.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.459 "hdgst": ${hdgst:-false}, 00:22:51.459 "ddgst": ${ddgst:-false} 00:22:51.459 }, 00:22:51.459 "method": "bdev_nvme_attach_controller" 00:22:51.459 } 00:22:51.459 EOF 00:22:51.459 )") 00:22:51.459 16:18:51 -- nvmf/common.sh@543 -- # cat 00:22:51.459 16:18:51 -- target/dif.sh@72 -- # (( file++ )) 00:22:51.459 16:18:51 -- target/dif.sh@72 -- # (( file <= files )) 00:22:51.459 16:18:51 -- nvmf/common.sh@545 -- # jq . 
00:22:51.459 16:18:51 -- nvmf/common.sh@546 -- # IFS=, 00:22:51.459 16:18:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:51.459 "params": { 00:22:51.459 "name": "Nvme0", 00:22:51.459 "trtype": "tcp", 00:22:51.459 "traddr": "10.0.0.2", 00:22:51.459 "adrfam": "ipv4", 00:22:51.459 "trsvcid": "4420", 00:22:51.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:51.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:51.459 "hdgst": false, 00:22:51.459 "ddgst": false 00:22:51.459 }, 00:22:51.459 "method": "bdev_nvme_attach_controller" 00:22:51.459 },{ 00:22:51.459 "params": { 00:22:51.459 "name": "Nvme1", 00:22:51.459 "trtype": "tcp", 00:22:51.459 "traddr": "10.0.0.2", 00:22:51.459 "adrfam": "ipv4", 00:22:51.459 "trsvcid": "4420", 00:22:51.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.459 "hdgst": false, 00:22:51.459 "ddgst": false 00:22:51.459 }, 00:22:51.459 "method": "bdev_nvme_attach_controller" 00:22:51.459 }' 00:22:51.459 16:18:51 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:51.459 16:18:51 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:51.459 16:18:51 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.459 16:18:51 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:51.459 16:18:51 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:51.459 16:18:51 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:51.459 16:18:51 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:51.459 16:18:51 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:51.459 16:18:51 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:51.459 16:18:51 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:51.459 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:51.459 ... 00:22:51.459 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:51.459 ... 
00:22:51.459 fio-3.35 00:22:51.459 Starting 4 threads 00:22:51.459 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.715 00:22:56.715 filename0: (groupid=0, jobs=1): err= 0: pid=3494574: Wed Apr 24 16:18:57 2024 00:22:56.715 read: IOPS=1822, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5002msec) 00:22:56.715 slat (nsec): min=7454, max=52654, avg=15562.87, stdev=7503.65 00:22:56.715 clat (usec): min=914, max=7548, avg=4341.70, stdev=722.35 00:22:56.715 lat (usec): min=928, max=7581, avg=4357.27, stdev=721.55 00:22:56.715 clat percentiles (usec): 00:22:56.715 | 1.00th=[ 3163], 5.00th=[ 3523], 10.00th=[ 3720], 20.00th=[ 3916], 00:22:56.715 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4228], 00:22:56.715 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5473], 95.00th=[ 6128], 00:22:56.715 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7504], 00:22:56.715 | 99.99th=[ 7570] 00:22:56.715 bw ( KiB/s): min=13984, max=15312, per=24.56%, avg=14575.70, stdev=396.91, samples=10 00:22:56.715 iops : min= 1748, max= 1914, avg=1821.90, stdev=49.62, samples=10 00:22:56.715 lat (usec) : 1000=0.02% 00:22:56.715 lat (msec) : 2=0.03%, 4=25.25%, 10=74.69% 00:22:56.715 cpu : usr=95.70%, sys=3.82%, ctx=12, majf=0, minf=38 00:22:56.715 IO depths : 1=0.2%, 2=3.5%, 4=69.2%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:56.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.715 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.715 issued rwts: total=9116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.715 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:56.715 filename0: (groupid=0, jobs=1): err= 0: pid=3494575: Wed Apr 24 16:18:57 2024 00:22:56.715 read: IOPS=1831, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5001msec) 00:22:56.715 slat (nsec): min=6965, max=59680, avg=14202.49, stdev=7190.43 00:22:56.715 clat (usec): min=1166, max=47280, avg=4324.43, stdev=1416.19 00:22:56.715 lat (usec): min=1179, max=47304, avg=4338.63, stdev=1415.79 00:22:56.715 clat percentiles (usec): 00:22:56.715 | 1.00th=[ 3064], 5.00th=[ 3556], 10.00th=[ 3720], 20.00th=[ 3916], 00:22:56.715 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:22:56.715 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5800], 00:22:56.715 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7767], 99.95th=[47449], 00:22:56.715 | 99.99th=[47449] 00:22:56.715 bw ( KiB/s): min=13392, max=15216, per=24.63%, avg=14620.44, stdev=662.35, samples=9 00:22:56.715 iops : min= 1674, max= 1902, avg=1827.56, stdev=82.79, samples=9 00:22:56.715 lat (msec) : 2=0.10%, 4=24.92%, 10=74.89%, 50=0.09% 00:22:56.715 cpu : usr=95.30%, sys=4.26%, ctx=9, majf=0, minf=64 00:22:56.715 IO depths : 1=0.2%, 2=3.6%, 4=68.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:56.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.715 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.715 issued rwts: total=9160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.715 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:56.715 filename1: (groupid=0, jobs=1): err= 0: pid=3494576: Wed Apr 24 16:18:57 2024 00:22:56.715 read: IOPS=1906, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5004msec) 00:22:56.715 slat (nsec): min=6946, max=59678, avg=13028.87, stdev=6762.67 00:22:56.715 clat (usec): min=1301, max=7484, avg=4155.15, stdev=599.60 00:22:56.715 lat (usec): min=1313, max=7492, avg=4168.18, stdev=599.46 00:22:56.715 clat percentiles 
(usec): 00:22:56.715 | 1.00th=[ 2769], 5.00th=[ 3294], 10.00th=[ 3523], 20.00th=[ 3785], 00:22:56.715 | 30.00th=[ 3949], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:22:56.715 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5407], 00:22:56.715 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 7242], 99.95th=[ 7439], 00:22:56.715 | 99.99th=[ 7504] 00:22:56.715 bw ( KiB/s): min=14624, max=16032, per=25.69%, avg=15248.00, stdev=538.38, samples=10 00:22:56.715 iops : min= 1828, max= 2004, avg=1906.00, stdev=67.30, samples=10 00:22:56.715 lat (msec) : 2=0.10%, 4=33.77%, 10=66.12% 00:22:56.715 cpu : usr=94.76%, sys=4.78%, ctx=10, majf=0, minf=28 00:22:56.715 IO depths : 1=0.1%, 2=5.0%, 4=67.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:56.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.715 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.715 issued rwts: total=9538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.715 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:56.715 filename1: (groupid=0, jobs=1): err= 0: pid=3494577: Wed Apr 24 16:18:57 2024 00:22:56.715 read: IOPS=1861, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5002msec) 00:22:56.715 slat (nsec): min=7231, max=54079, avg=13862.04, stdev=7250.40 00:22:56.715 clat (usec): min=1032, max=7474, avg=4253.92, stdev=605.08 00:22:56.715 lat (usec): min=1045, max=7483, avg=4267.79, stdev=604.77 00:22:56.715 clat percentiles (usec): 00:22:56.716 | 1.00th=[ 2999], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 3916], 00:22:56.716 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:22:56.716 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5604], 00:22:56.716 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7242], 00:22:56.716 | 99.99th=[ 7504] 00:22:56.716 bw ( KiB/s): min=13968, max=15168, per=25.05%, avg=14867.22, stdev=383.46, samples=9 00:22:56.716 iops : min= 1746, max= 1896, avg=1858.33, stdev=47.91, samples=9 00:22:56.716 lat (msec) : 2=0.18%, 4=26.31%, 10=73.51% 00:22:56.716 cpu : usr=95.50%, sys=4.02%, ctx=13, majf=0, minf=49 00:22:56.716 IO depths : 1=0.1%, 2=4.3%, 4=68.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:56.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.716 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.716 issued rwts: total=9310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.716 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:56.716 00:22:56.716 Run status group 0 (all jobs): 00:22:56.716 READ: bw=58.0MiB/s (60.8MB/s), 14.2MiB/s-14.9MiB/s (14.9MB/s-15.6MB/s), io=290MiB (304MB), run=5001-5004msec 00:22:56.716 16:18:57 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:56.716 16:18:57 -- target/dif.sh@43 -- # local sub 00:22:56.716 16:18:57 -- target/dif.sh@45 -- # for sub in "$@" 00:22:56.716 16:18:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:56.716 16:18:57 -- target/dif.sh@36 -- # local sub_id=0 00:22:56.716 16:18:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:56.716 16:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.716 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:22:56.716 16:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.716 16:18:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:56.716 16:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.716 16:18:57 -- 
common/autotest_common.sh@10 -- # set +x 00:22:56.716 16:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.716 16:18:57 -- target/dif.sh@45 -- # for sub in "$@" 00:22:56.716 16:18:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:56.716 16:18:57 -- target/dif.sh@36 -- # local sub_id=1 00:22:56.716 16:18:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:56.716 16:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.716 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:22:56.716 16:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.716 16:18:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:56.716 16:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.716 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:22:56.716 16:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.716 00:22:56.716 real 0m24.499s 00:22:56.716 user 4m29.796s 00:22:56.716 sys 0m8.322s 00:22:56.716 16:18:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:56.716 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:22:56.716 ************************************ 00:22:56.716 END TEST fio_dif_rand_params 00:22:56.716 ************************************ 00:22:56.716 16:18:57 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:56.716 16:18:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:56.716 16:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:56.716 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:22:56.972 ************************************ 00:22:56.972 START TEST fio_dif_digest 00:22:56.972 ************************************ 00:22:56.972 16:18:58 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:22:56.972 16:18:58 -- target/dif.sh@123 -- # local NULL_DIF 00:22:56.972 16:18:58 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:56.972 16:18:58 -- target/dif.sh@125 -- # local hdgst ddgst 00:22:56.972 16:18:58 -- target/dif.sh@127 -- # NULL_DIF=3 00:22:56.972 16:18:58 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:56.972 16:18:58 -- target/dif.sh@127 -- # numjobs=3 00:22:56.972 16:18:58 -- target/dif.sh@127 -- # iodepth=3 00:22:56.972 16:18:58 -- target/dif.sh@127 -- # runtime=10 00:22:56.972 16:18:58 -- target/dif.sh@128 -- # hdgst=true 00:22:56.972 16:18:58 -- target/dif.sh@128 -- # ddgst=true 00:22:56.972 16:18:58 -- target/dif.sh@130 -- # create_subsystems 0 00:22:56.972 16:18:58 -- target/dif.sh@28 -- # local sub 00:22:56.972 16:18:58 -- target/dif.sh@30 -- # for sub in "$@" 00:22:56.972 16:18:58 -- target/dif.sh@31 -- # create_subsystem 0 00:22:56.972 16:18:58 -- target/dif.sh@18 -- # local sub_id=0 00:22:56.972 16:18:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:56.972 16:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.972 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.972 bdev_null0 00:22:56.972 16:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.972 16:18:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:56.972 16:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.972 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.972 16:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.972 16:18:58 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:56.972 16:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.972 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.972 16:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.972 16:18:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:56.972 16:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.972 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.972 [2024-04-24 16:18:58.096977] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.972 16:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.972 16:18:58 -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:56.972 16:18:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:56.972 16:18:58 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:56.972 16:18:58 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:56.972 16:18:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:56.972 16:18:58 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:56.972 16:18:58 -- nvmf/common.sh@521 -- # config=() 00:22:56.972 16:18:58 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:56.972 16:18:58 -- target/dif.sh@82 -- # gen_fio_conf 00:22:56.972 16:18:58 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:56.972 16:18:58 -- nvmf/common.sh@521 -- # local subsystem config 00:22:56.972 16:18:58 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:56.972 16:18:58 -- target/dif.sh@54 -- # local file 00:22:56.972 16:18:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:56.972 16:18:58 -- common/autotest_common.sh@1327 -- # shift 00:22:56.972 16:18:58 -- target/dif.sh@56 -- # cat 00:22:56.972 16:18:58 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:56.972 16:18:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:56.972 { 00:22:56.972 "params": { 00:22:56.972 "name": "Nvme$subsystem", 00:22:56.972 "trtype": "$TEST_TRANSPORT", 00:22:56.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.972 "adrfam": "ipv4", 00:22:56.972 "trsvcid": "$NVMF_PORT", 00:22:56.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.972 "hdgst": ${hdgst:-false}, 00:22:56.972 "ddgst": ${ddgst:-false} 00:22:56.972 }, 00:22:56.972 "method": "bdev_nvme_attach_controller" 00:22:56.972 } 00:22:56.972 EOF 00:22:56.972 )") 00:22:56.972 16:18:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:56.973 16:18:58 -- nvmf/common.sh@543 -- # cat 00:22:56.973 16:18:58 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:56.973 16:18:58 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:56.973 16:18:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:56.973 16:18:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:56.973 16:18:58 -- target/dif.sh@72 -- # (( file <= files )) 00:22:56.973 16:18:58 -- nvmf/common.sh@545 -- # jq . 
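The digest pass reuses the same machinery with NULL_DIF=3, three 128 KiB jobs at iodepth 3 for 10 seconds, and hdgst/ddgst flipped to true, so the initiator emits NVMe/TCP header and data digests on every PDU. A rough standalone equivalent of what gen_fio_conf produces here is sketched below; the file names digest.fio and bdev.json and the Nvme0n1 bdev label are assumptions, only the job parameters are taken from the traced run.

# Sketch: digest-test fio job plus plugin invocation (assumed layout).
cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
EOF
# hdgst/ddgst are not fio knobs; they travel in the JSON bdev config that
# bdev_nvme_attach_controller consumes, as the trace prints just below.
LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev \
    --spdk_json_conf=./bdev.json ./digest.fio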
00:22:56.973 16:18:58 -- nvmf/common.sh@546 -- # IFS=, 00:22:56.973 16:18:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:56.973 "params": { 00:22:56.973 "name": "Nvme0", 00:22:56.973 "trtype": "tcp", 00:22:56.973 "traddr": "10.0.0.2", 00:22:56.973 "adrfam": "ipv4", 00:22:56.973 "trsvcid": "4420", 00:22:56.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:56.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:56.973 "hdgst": true, 00:22:56.973 "ddgst": true 00:22:56.973 }, 00:22:56.973 "method": "bdev_nvme_attach_controller" 00:22:56.973 }' 00:22:56.973 16:18:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:56.973 16:18:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:56.973 16:18:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:56.973 16:18:58 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:56.973 16:18:58 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:56.973 16:18:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:56.973 16:18:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:56.973 16:18:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:56.973 16:18:58 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:56.973 16:18:58 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:57.229 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:57.229 ... 00:22:57.229 fio-3.35 00:22:57.229 Starting 3 threads 00:22:57.229 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.417 00:23:09.417 filename0: (groupid=0, jobs=1): err= 0: pid=3495457: Wed Apr 24 16:19:08 2024 00:23:09.417 read: IOPS=186, BW=23.3MiB/s (24.5MB/s)(235MiB/10048msec) 00:23:09.417 slat (nsec): min=4730, max=34624, avg=13065.65, stdev=3100.57 00:23:09.417 clat (usec): min=9371, max=60121, avg=16020.16, stdev=5498.48 00:23:09.417 lat (usec): min=9383, max=60135, avg=16033.22, stdev=5498.54 00:23:09.417 clat percentiles (usec): 00:23:09.417 | 1.00th=[11600], 5.00th=[13173], 10.00th=[13829], 20.00th=[14353], 00:23:09.417 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:23:09.417 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17695], 00:23:09.417 | 99.00th=[56886], 99.50th=[57410], 99.90th=[58983], 99.95th=[60031], 00:23:09.417 | 99.99th=[60031] 00:23:09.417 bw ( KiB/s): min=20992, max=26624, per=32.94%, avg=23987.20, stdev=1593.53, samples=20 00:23:09.417 iops : min= 164, max= 208, avg=187.40, stdev=12.45, samples=20 00:23:09.417 lat (msec) : 10=0.11%, 20=98.03%, 50=0.21%, 100=1.65% 00:23:09.417 cpu : usr=90.24%, sys=9.31%, ctx=22, majf=0, minf=75 00:23:09.417 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:09.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.417 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.417 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:09.417 filename0: (groupid=0, jobs=1): err= 0: pid=3495458: Wed Apr 24 16:19:08 2024 00:23:09.417 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10047msec) 00:23:09.417 slat (nsec): min=7548, max=62136, avg=12976.29, stdev=3550.77 00:23:09.417 clat (usec): 
min=9353, max=57855, avg=15339.13, stdev=2606.18 00:23:09.417 lat (usec): min=9365, max=57867, avg=15352.11, stdev=2606.29 00:23:09.417 clat percentiles (usec): 00:23:09.417 | 1.00th=[10159], 5.00th=[11469], 10.00th=[13304], 20.00th=[14222], 00:23:09.417 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15533], 60.00th=[15795], 00:23:09.417 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:23:09.417 | 99.00th=[18482], 99.50th=[19006], 99.90th=[57934], 99.95th=[57934], 00:23:09.417 | 99.99th=[57934] 00:23:09.417 bw ( KiB/s): min=23599, max=26368, per=34.42%, avg=25064.75, stdev=708.83, samples=20 00:23:09.417 iops : min= 184, max= 206, avg=195.80, stdev= 5.58, samples=20 00:23:09.417 lat (msec) : 10=0.71%, 20=98.98%, 50=0.10%, 100=0.20% 00:23:09.417 cpu : usr=90.16%, sys=9.38%, ctx=21, majf=0, minf=139 00:23:09.417 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:09.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.418 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.418 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:09.418 filename0: (groupid=0, jobs=1): err= 0: pid=3495459: Wed Apr 24 16:19:08 2024 00:23:09.418 read: IOPS=187, BW=23.4MiB/s (24.5MB/s)(235MiB/10044msec) 00:23:09.418 slat (nsec): min=4468, max=35792, avg=13484.66, stdev=3164.00 00:23:09.418 clat (usec): min=9651, max=58927, avg=15989.14, stdev=3512.77 00:23:09.418 lat (usec): min=9663, max=58941, avg=16002.62, stdev=3512.79 00:23:09.418 clat percentiles (usec): 00:23:09.418 | 1.00th=[10814], 5.00th=[12125], 10.00th=[13829], 20.00th=[14746], 00:23:09.418 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15926], 60.00th=[16188], 00:23:09.418 | 70.00th=[16581], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:23:09.418 | 99.00th=[19530], 99.50th=[53740], 99.90th=[58983], 99.95th=[58983], 00:23:09.418 | 99.99th=[58983] 00:23:09.418 bw ( KiB/s): min=21760, max=26368, per=33.00%, avg=24036.10, stdev=1236.96, samples=20 00:23:09.418 iops : min= 170, max= 206, avg=187.75, stdev= 9.70, samples=20 00:23:09.418 lat (msec) : 10=0.05%, 20=99.04%, 50=0.32%, 100=0.59% 00:23:09.418 cpu : usr=90.35%, sys=9.18%, ctx=23, majf=0, minf=92 00:23:09.418 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:09.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.418 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.418 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:09.418 00:23:09.418 Run status group 0 (all jobs): 00:23:09.418 READ: bw=71.1MiB/s (74.6MB/s), 23.3MiB/s-24.4MiB/s (24.5MB/s-25.6MB/s), io=715MiB (749MB), run=10044-10048msec 00:23:09.418 16:19:09 -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:09.418 16:19:09 -- target/dif.sh@43 -- # local sub 00:23:09.418 16:19:09 -- target/dif.sh@45 -- # for sub in "$@" 00:23:09.418 16:19:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:09.418 16:19:09 -- target/dif.sh@36 -- # local sub_id=0 00:23:09.418 16:19:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:09.418 16:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.418 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:09.418 16:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
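Teardown mirrors setup in reverse: destroy_subsystem removes the NVMe-oF subsystem first and only then deletes the null bdev that backed its namespace, so no namespace is left pointing at a vanished bdev. Against a running target the same two calls look like this (the rpc.py path is an assumption for your tree):

rpc=./scripts/rpc.py   # assumed location of SPDK's RPC client
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_null_delete bdev_null0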
00:23:09.418 16:19:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:09.418 16:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.418 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:09.418 16:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.418 00:23:09.418 real 0m11.209s 00:23:09.418 user 0m28.348s 00:23:09.418 sys 0m3.084s 00:23:09.418 16:19:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:09.418 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:09.418 ************************************ 00:23:09.418 END TEST fio_dif_digest 00:23:09.418 ************************************ 00:23:09.418 16:19:09 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:09.418 16:19:09 -- target/dif.sh@147 -- # nvmftestfini 00:23:09.418 16:19:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:09.418 16:19:09 -- nvmf/common.sh@117 -- # sync 00:23:09.418 16:19:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.418 16:19:09 -- nvmf/common.sh@120 -- # set +e 00:23:09.418 16:19:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.418 16:19:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.418 rmmod nvme_tcp 00:23:09.418 rmmod nvme_fabrics 00:23:09.418 rmmod nvme_keyring 00:23:09.418 16:19:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.418 16:19:09 -- nvmf/common.sh@124 -- # set -e 00:23:09.418 16:19:09 -- nvmf/common.sh@125 -- # return 0 00:23:09.418 16:19:09 -- nvmf/common.sh@478 -- # '[' -n 3489243 ']' 00:23:09.418 16:19:09 -- nvmf/common.sh@479 -- # killprocess 3489243 00:23:09.418 16:19:09 -- common/autotest_common.sh@936 -- # '[' -z 3489243 ']' 00:23:09.418 16:19:09 -- common/autotest_common.sh@940 -- # kill -0 3489243 00:23:09.418 16:19:09 -- common/autotest_common.sh@941 -- # uname 00:23:09.418 16:19:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.418 16:19:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3489243 00:23:09.418 16:19:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:09.418 16:19:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:09.418 16:19:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3489243' 00:23:09.418 killing process with pid 3489243 00:23:09.418 16:19:09 -- common/autotest_common.sh@955 -- # kill 3489243 00:23:09.418 16:19:09 -- common/autotest_common.sh@960 -- # wait 3489243 00:23:09.418 16:19:09 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:23:09.418 16:19:09 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:09.676 Waiting for block devices as requested 00:23:09.676 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:09.676 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:09.676 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:09.933 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:09.933 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:09.933 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:09.933 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:10.191 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:10.191 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:10.191 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:10.449 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:10.449 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:10.449 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:10.449 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:10.707 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:23:10.707 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:10.707 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:10.966 16:19:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:10.966 16:19:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:10.966 16:19:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.966 16:19:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.966 16:19:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.966 16:19:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:10.966 16:19:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.945 16:19:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.945 00:23:12.945 real 1m7.105s 00:23:12.945 user 6m25.632s 00:23:12.945 sys 0m21.371s 00:23:12.945 16:19:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:12.945 16:19:14 -- common/autotest_common.sh@10 -- # set +x 00:23:12.945 ************************************ 00:23:12.945 END TEST nvmf_dif 00:23:12.945 ************************************ 00:23:12.945 16:19:14 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:12.945 16:19:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:12.945 16:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:12.945 16:19:14 -- common/autotest_common.sh@10 -- # set +x 00:23:12.945 ************************************ 00:23:12.945 START TEST nvmf_abort_qd_sizes 00:23:12.945 ************************************ 00:23:12.945 16:19:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:12.945 * Looking for test storage... 
00:23:12.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:12.945 16:19:14 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.945 16:19:14 -- nvmf/common.sh@7 -- # uname -s 00:23:12.945 16:19:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.945 16:19:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.945 16:19:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.945 16:19:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.945 16:19:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.945 16:19:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.945 16:19:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.945 16:19:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.945 16:19:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.945 16:19:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.210 16:19:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.210 16:19:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.210 16:19:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.210 16:19:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.210 16:19:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.210 16:19:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.210 16:19:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.210 16:19:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.210 16:19:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.210 16:19:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.210 16:19:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.210 16:19:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.210 16:19:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.210 16:19:14 -- paths/export.sh@5 -- # export PATH 00:23:13.210 16:19:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.210 16:19:14 -- nvmf/common.sh@47 -- # : 0 00:23:13.210 16:19:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.210 16:19:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.210 16:19:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.210 16:19:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.210 16:19:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.210 16:19:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.210 16:19:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.210 16:19:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.210 16:19:14 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:13.210 16:19:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:13.210 16:19:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.210 16:19:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:13.210 16:19:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:13.210 16:19:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:13.210 16:19:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.210 16:19:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:13.210 16:19:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.210 16:19:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:13.210 16:19:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:13.210 16:19:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.210 16:19:14 -- common/autotest_common.sh@10 -- # set +x 00:23:15.109 16:19:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:15.109 16:19:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.109 16:19:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.109 16:19:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.109 16:19:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.109 16:19:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.109 16:19:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.109 16:19:16 -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.109 16:19:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.109 16:19:16 -- nvmf/common.sh@296 -- # e810=() 00:23:15.109 16:19:16 -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.110 16:19:16 -- nvmf/common.sh@297 -- # x722=() 00:23:15.110 16:19:16 -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.110 16:19:16 -- nvmf/common.sh@298 -- # mlx=() 00:23:15.110 16:19:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.110 16:19:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.110 16:19:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.110 16:19:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.110 16:19:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.110 16:19:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.110 16:19:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:15.110 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:15.110 16:19:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.110 16:19:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:15.110 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:15.110 16:19:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.110 16:19:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.110 16:19:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.110 16:19:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:15.110 16:19:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.110 16:19:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:15.110 Found net devices under 0000:09:00.0: cvl_0_0 00:23:15.110 16:19:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.110 16:19:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.110 16:19:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.110 16:19:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:15.110 16:19:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.110 16:19:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:15.110 Found net devices under 0000:09:00.1: cvl_0_1 00:23:15.110 16:19:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.110 16:19:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:15.110 16:19:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:15.110 16:19:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:15.110 16:19:16 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:15.110 16:19:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:15.110 16:19:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.110 16:19:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.110 16:19:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.110 16:19:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.110 16:19:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.110 16:19:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.110 16:19:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.110 16:19:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.110 16:19:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.110 16:19:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.110 16:19:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.110 16:19:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.110 16:19:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.110 16:19:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.110 16:19:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.110 16:19:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.368 16:19:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.368 16:19:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.368 16:19:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.368 16:19:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:23:15.368 00:23:15.368 --- 10.0.0.2 ping statistics --- 00:23:15.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.368 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:23:15.368 16:19:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:23:15.368 00:23:15.368 --- 10.0.0.1 ping statistics --- 00:23:15.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.368 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:23:15.368 16:19:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.368 16:19:16 -- nvmf/common.sh@411 -- # return 0 00:23:15.368 16:19:16 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:23:15.368 16:19:16 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:16.304 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:16.304 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:16.304 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:16.304 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:16.304 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:16.304 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:16.304 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:16.304 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:16.304 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:16.304 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:16.304 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:16.304 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:16.304 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:16.304 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:16.304 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:16.304 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:17.240 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:17.499 16:19:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.499 16:19:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:17.499 16:19:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:17.499 16:19:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.499 16:19:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:17.499 16:19:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:17.499 16:19:18 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:17.499 16:19:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:17.499 16:19:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:17.499 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:23:17.499 16:19:18 -- nvmf/common.sh@470 -- # nvmfpid=3500264 00:23:17.499 16:19:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:17.499 16:19:18 -- nvmf/common.sh@471 -- # waitforlisten 3500264 00:23:17.499 16:19:18 -- common/autotest_common.sh@817 -- # '[' -z 3500264 ']' 00:23:17.499 16:19:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.499 16:19:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:17.499 16:19:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.499 16:19:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:17.499 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:23:17.499 [2024-04-24 16:19:18.636450] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
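Note that the target does not run in the root namespace: nvmftestinit moved one port of the e810 pair (cvl_0_0) into a private network namespace and left its peer (cvl_0_1) outside, so initiator and target traffic really crosses the physical link. Condensed from the commands traced above, with the interface names and addresses of this run (nvmf_tgt path relative to the SPDK tree):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays outside
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# Launch the target inside the namespace, as the trace does:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf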
00:23:17.499 [2024-04-24 16:19:18.636538] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.499 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.499 [2024-04-24 16:19:18.700977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.757 [2024-04-24 16:19:18.807994] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.757 [2024-04-24 16:19:18.808043] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.757 [2024-04-24 16:19:18.808066] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.757 [2024-04-24 16:19:18.808078] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.757 [2024-04-24 16:19:18.808089] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.757 [2024-04-24 16:19:18.808170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.757 [2024-04-24 16:19:18.808226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.757 [2024-04-24 16:19:18.808292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.757 [2024-04-24 16:19:18.808295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.757 16:19:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:17.757 16:19:18 -- common/autotest_common.sh@850 -- # return 0 00:23:17.757 16:19:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:17.757 16:19:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:17.757 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:23:17.757 16:19:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.757 16:19:18 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:17.757 16:19:18 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:17.757 16:19:18 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:17.757 16:19:18 -- scripts/common.sh@309 -- # local bdf bdfs 00:23:17.757 16:19:18 -- scripts/common.sh@310 -- # local nvmes 00:23:17.757 16:19:18 -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:23:17.757 16:19:18 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:23:17.757 16:19:18 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:17.757 16:19:18 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:23:17.757 16:19:18 -- scripts/common.sh@320 -- # uname -s 00:23:17.757 16:19:18 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:17.757 16:19:18 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:17.757 16:19:18 -- scripts/common.sh@325 -- # (( 1 )) 00:23:17.757 16:19:18 -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:23:17.757 16:19:18 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:23:17.757 16:19:18 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:23:17.757 16:19:18 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:17.757 16:19:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:17.757 16:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:17.757 16:19:18 -- 
common/autotest_common.sh@10 -- # set +x 00:23:18.016 ************************************ 00:23:18.016 START TEST spdk_target_abort 00:23:18.016 ************************************ 00:23:18.016 16:19:19 -- common/autotest_common.sh@1111 -- # spdk_target 00:23:18.016 16:19:19 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:18.016 16:19:19 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:23:18.016 16:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.016 16:19:19 -- common/autotest_common.sh@10 -- # set +x 00:23:21.303 spdk_targetn1 00:23:21.303 16:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.303 16:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.303 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:23:21.303 [2024-04-24 16:19:21.924758] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.303 16:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:21.303 16:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.303 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:23:21.303 16:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:21.303 16:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.303 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:23:21.303 16:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:21.303 16:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.303 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:23:21.303 [2024-04-24 16:19:21.957048] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.303 16:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
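The trace above shows spdk_target_abort standing up an NVMe-oF/TCP target around the local NVMe drive before the abort runs begin (the rabort trace continues below). Condensed into plain rpc.py calls, it amounts to the following sketch; the test uses its rpc_cmd wrapper, but every argument below appears verbatim in the trace:

```bash
# Claim the local NVMe drive as an SPDK bdev, then export it over NVMe-oF/TCP.
rpc=scripts/rpc.py   # relative to the spdk checkout
$rpc bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
```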
00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:21.303 16:19:21 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:21.303 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.835 Initializing NVMe Controllers 00:23:23.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:23.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:23.835 Initialization complete. Launching workers. 00:23:23.835 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10658, failed: 0 00:23:23.835 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1328, failed to submit 9330 00:23:23.835 success 808, unsuccess 520, failed 0 00:23:23.835 16:19:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:23.835 16:19:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:24.092 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.370 Initializing NVMe Controllers 00:23:27.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:27.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:27.370 Initialization complete. Launching workers. 00:23:27.370 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8885, failed: 0 00:23:27.370 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 7626 00:23:27.370 success 278, unsuccess 981, failed 0 00:23:27.370 16:19:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:27.370 16:19:28 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:27.370 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.651 Initializing NVMe Controllers 00:23:30.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:30.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:30.651 Initialization complete. Launching workers. 
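The rabort helper driving these runs reduces to a loop over queue depths; per run, "I/O completed" appears to count I/Os that finished, "abort submitted" vs "failed to submit" the abort commands that could or could not be queued, and success/unsuccess roughly whether an accepted abort caught its target I/O in flight. A sketch, with every flag and the transport ID taken from the command lines above (the 64-deep run's results follow on the next trace line):

```bash
# Run the SPDK abort example at each queue depth against the TCP target.
trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
	build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
done
```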
00:23:30.651 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30421, failed: 0 00:23:30.651 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2772, failed to submit 27649 00:23:30.651 success 530, unsuccess 2242, failed 0 00:23:30.651 16:19:31 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:30.651 16:19:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.651 16:19:31 -- common/autotest_common.sh@10 -- # set +x 00:23:30.651 16:19:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.651 16:19:31 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:30.651 16:19:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.651 16:19:31 -- common/autotest_common.sh@10 -- # set +x 00:23:31.584 16:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.584 16:19:32 -- target/abort_qd_sizes.sh@61 -- # killprocess 3500264 00:23:31.584 16:19:32 -- common/autotest_common.sh@936 -- # '[' -z 3500264 ']' 00:23:31.584 16:19:32 -- common/autotest_common.sh@940 -- # kill -0 3500264 00:23:31.584 16:19:32 -- common/autotest_common.sh@941 -- # uname 00:23:31.584 16:19:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.584 16:19:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3500264 00:23:31.584 16:19:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:31.584 16:19:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:31.584 16:19:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3500264' 00:23:31.584 killing process with pid 3500264 00:23:31.584 16:19:32 -- common/autotest_common.sh@955 -- # kill 3500264 00:23:31.584 16:19:32 -- common/autotest_common.sh@960 -- # wait 3500264 00:23:31.846 00:23:31.846 real 0m13.968s 00:23:31.846 user 0m52.165s 00:23:31.846 sys 0m2.907s 00:23:31.846 16:19:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:31.846 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.846 ************************************ 00:23:31.846 END TEST spdk_target_abort 00:23:31.846 ************************************ 00:23:31.846 16:19:33 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:31.846 16:19:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:31.846 16:19:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:31.846 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:23:32.106 ************************************ 00:23:32.106 START TEST kernel_target_abort 00:23:32.106 ************************************ 00:23:32.106 16:19:33 -- common/autotest_common.sh@1111 -- # kernel_target 00:23:32.106 16:19:33 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:32.106 16:19:33 -- nvmf/common.sh@717 -- # local ip 00:23:32.106 16:19:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:32.106 16:19:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:32.106 16:19:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.106 16:19:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.106 16:19:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:32.107 16:19:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.107 16:19:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:32.107 16:19:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:32.107 16:19:33 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:23:32.107 16:19:33 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:32.107 16:19:33 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:32.107 16:19:33 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:32.107 16:19:33 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:32.107 16:19:33 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:32.107 16:19:33 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:32.107 16:19:33 -- nvmf/common.sh@628 -- # local block nvme 00:23:32.107 16:19:33 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:23:32.107 16:19:33 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:32.107 16:19:33 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:32.107 16:19:33 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:33.042 Waiting for block devices as requested 00:23:33.042 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:33.042 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:33.300 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:33.300 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:33.300 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:33.300 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:33.559 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:33.559 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:33.559 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:33.559 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:33.817 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:33.817 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:33.817 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:33.817 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:34.075 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:34.075 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:34.075 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:34.334 16:19:35 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:34.334 16:19:35 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:34.334 16:19:35 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:34.334 16:19:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:34.334 16:19:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:34.334 16:19:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:34.334 16:19:35 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:34.334 16:19:35 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:34.334 16:19:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:34.334 No valid GPT data, bailing 00:23:34.334 16:19:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:34.334 16:19:35 -- scripts/common.sh@391 -- # pt= 00:23:34.334 16:19:35 -- scripts/common.sh@392 -- # return 1 00:23:34.334 16:19:35 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:34.334 16:19:35 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:34.334 16:19:35 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:34.334 16:19:35 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:34.334 16:19:35 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:34.334 16:19:35 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:34.334 16:19:35 -- nvmf/common.sh@656 -- # echo 1 00:23:34.334 16:19:35 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:34.334 16:19:35 -- nvmf/common.sh@658 -- # echo 1 00:23:34.334 16:19:35 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:34.334 16:19:35 -- nvmf/common.sh@661 -- # echo tcp 00:23:34.334 16:19:35 -- nvmf/common.sh@662 -- # echo 4420 00:23:34.334 16:19:35 -- nvmf/common.sh@663 -- # echo ipv4 00:23:34.335 16:19:35 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:34.335 16:19:35 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:34.335 00:23:34.335 Discovery Log Number of Records 2, Generation counter 2 00:23:34.335 =====Discovery Log Entry 0====== 00:23:34.335 trtype: tcp 00:23:34.335 adrfam: ipv4 00:23:34.335 subtype: current discovery subsystem 00:23:34.335 treq: not specified, sq flow control disable supported 00:23:34.335 portid: 1 00:23:34.335 trsvcid: 4420 00:23:34.335 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:34.335 traddr: 10.0.0.1 00:23:34.335 eflags: none 00:23:34.335 sectype: none 00:23:34.335 =====Discovery Log Entry 1====== 00:23:34.335 trtype: tcp 00:23:34.335 adrfam: ipv4 00:23:34.335 subtype: nvme subsystem 00:23:34.335 treq: not specified, sq flow control disable supported 00:23:34.335 portid: 1 00:23:34.335 trsvcid: 4420 00:23:34.335 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:34.335 traddr: 10.0.0.1 00:23:34.335 eflags: none 00:23:34.335 sectype: none 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:34.335 16:19:35 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:34.335 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.613 Initializing NVMe Controllers 00:23:37.613 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:37.613 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:37.613 Initialization complete. Launching workers. 00:23:37.613 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37248, failed: 0 00:23:37.613 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37248, failed to submit 0 00:23:37.613 success 0, unsuccess 37248, failed 0 00:23:37.613 16:19:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:37.613 16:19:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:37.613 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.892 Initializing NVMe Controllers 00:23:40.892 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:40.892 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:40.892 Initialization complete. Launching workers. 00:23:40.892 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71762, failed: 0 00:23:40.892 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18086, failed to submit 53676 00:23:40.892 success 0, unsuccess 18086, failed 0 00:23:40.892 16:19:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:40.892 16:19:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:40.892 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.421 Initializing NVMe Controllers 00:23:43.421 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:43.421 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:43.421 Initialization complete. Launching workers. 
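For kernel_target_abort the roles flip: the kernel nvmet driver exports the same drive back over TCP, and the configure_kernel_target trace a few lines up reduces to a configfs sequence like the sketch below (the 64-deep kernel run's results continue on the next trace line). Note this is a sketch: set -x hides redirections, so the echo destinations are assumed from the stock nvmet configfs layout rather than visible in this log.

```bash
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

modprobe nvmet nvmet-tcp   # the trace shows only nvmet; nvmet-tcp added here for completeness
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"            # destination assumed
echo 1                                > "$subsys/attr_allow_any_host"   # destination assumed
echo /dev/nvme0n1                     > "$subsys/namespaces/1/device_path"
echo 1                                > "$subsys/namespaces/1/enable"
echo 10.0.0.1                         > "$port/addr_traddr"
echo tcp                              > "$port/addr_trtype"
echo 4420                             > "$port/addr_trsvcid"
echo ipv4                             > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port
```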
00:23:43.421 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72972, failed: 0 00:23:43.421 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18198, failed to submit 54774 00:23:43.421 success 0, unsuccess 18198, failed 0 00:23:43.421 16:19:44 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:43.421 16:19:44 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:43.421 16:19:44 -- nvmf/common.sh@675 -- # echo 0 00:23:43.681 16:19:44 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:43.681 16:19:44 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:43.681 16:19:44 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:43.681 16:19:44 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:43.681 16:19:44 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:43.681 16:19:44 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:43.681 16:19:44 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:44.615 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:44.615 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:44.615 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:44.615 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:44.615 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:44.615 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:44.615 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:44.615 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:44.615 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:44.615 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:44.615 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:44.615 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:44.615 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:44.615 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:44.615 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:44.615 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:45.551 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:45.811 00:23:45.811 real 0m13.721s 00:23:45.811 user 0m5.593s 00:23:45.811 sys 0m3.061s 00:23:45.811 16:19:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:45.811 16:19:46 -- common/autotest_common.sh@10 -- # set +x 00:23:45.811 ************************************ 00:23:45.811 END TEST kernel_target_abort 00:23:45.811 ************************************ 00:23:45.811 16:19:46 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:45.811 16:19:46 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:45.811 16:19:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:45.811 16:19:46 -- nvmf/common.sh@117 -- # sync 00:23:45.811 16:19:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.811 16:19:46 -- nvmf/common.sh@120 -- # set +e 00:23:45.811 16:19:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.811 16:19:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.811 rmmod nvme_tcp 00:23:45.811 rmmod nvme_fabrics 00:23:45.811 rmmod nvme_keyring 00:23:45.811 16:19:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.811 16:19:46 -- nvmf/common.sh@124 -- # set -e 00:23:45.811 16:19:46 -- nvmf/common.sh@125 -- # return 0 00:23:45.811 16:19:46 -- nvmf/common.sh@478 -- # '[' -n 3500264 ']' 
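clean_kernel_target, traced just above, tears the same tree down in reverse order; every command below is visible in the trace (variables as in the setup sketch earlier, echo destination assumed as before):

```bash
echo 0 > "$subsys/namespaces/1/enable"                 # disable the namespace first
rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"  # unlink port from subsystem
rmdir  "$subsys/namespaces/1"
rmdir  "$port"
rmdir  "$subsys"
modprobe -r nvmet_tcp nvmet
```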
00:23:45.811 16:19:46 -- nvmf/common.sh@479 -- # killprocess 3500264 00:23:45.811 16:19:46 -- common/autotest_common.sh@936 -- # '[' -z 3500264 ']' 00:23:45.811 16:19:46 -- common/autotest_common.sh@940 -- # kill -0 3500264 00:23:45.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3500264) - No such process 00:23:45.811 16:19:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3500264 is not found' 00:23:45.811 Process with pid 3500264 is not found 00:23:45.811 16:19:46 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:23:45.811 16:19:46 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:46.745 Waiting for block devices as requested 00:23:46.745 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:47.004 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:47.004 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:47.004 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:47.004 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:47.262 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:47.262 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:47.262 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:47.262 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:47.521 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:47.521 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:47.521 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:47.521 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:47.779 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:47.779 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:47.779 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:48.036 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:48.036 16:19:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:48.036 16:19:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:48.036 16:19:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.036 16:19:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.036 16:19:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.036 16:19:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:48.036 16:19:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.973 16:19:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.973 00:23:49.973 real 0m37.016s 00:23:49.973 user 0m59.821s 00:23:49.973 sys 0m9.298s 00:23:49.973 16:19:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:49.973 16:19:51 -- common/autotest_common.sh@10 -- # set +x 00:23:49.973 ************************************ 00:23:49.973 END TEST nvmf_abort_qd_sizes 00:23:49.973 ************************************ 00:23:49.973 16:19:51 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:49.973 16:19:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:49.973 16:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:49.973 16:19:51 -- common/autotest_common.sh@10 -- # set +x 00:23:50.238 ************************************ 00:23:50.238 START TEST keyring_file 00:23:50.238 ************************************ 00:23:50.238 16:19:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:50.238 * Looking for test storage... 
00:23:50.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:23:50.238 16:19:51 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:23:50.238 16:19:51 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.238 16:19:51 -- nvmf/common.sh@7 -- # uname -s 00:23:50.238 16:19:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.238 16:19:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.238 16:19:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.238 16:19:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.238 16:19:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.238 16:19:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.238 16:19:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.238 16:19:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.238 16:19:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.238 16:19:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.238 16:19:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:50.238 16:19:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:50.238 16:19:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.238 16:19:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.238 16:19:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.238 16:19:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.238 16:19:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.238 16:19:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.238 16:19:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.238 16:19:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.238 16:19:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.238 16:19:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.238 16:19:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.238 16:19:51 -- paths/export.sh@5 -- # export PATH 00:23:50.238 16:19:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.238 16:19:51 -- nvmf/common.sh@47 -- # : 0 00:23:50.238 16:19:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.238 16:19:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.238 16:19:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.238 16:19:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.238 16:19:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.238 16:19:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.238 16:19:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.238 16:19:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.238 16:19:51 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:50.238 16:19:51 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:50.238 16:19:51 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:50.238 16:19:51 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:50.238 16:19:51 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:50.238 16:19:51 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:50.238 16:19:51 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:50.238 16:19:51 -- keyring/common.sh@15 -- # local name key digest path 00:23:50.238 16:19:51 -- keyring/common.sh@17 -- # name=key0 00:23:50.238 16:19:51 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:50.238 16:19:51 -- keyring/common.sh@17 -- # digest=0 00:23:50.238 16:19:51 -- keyring/common.sh@18 -- # mktemp 00:23:50.238 16:19:51 -- keyring/common.sh@18 -- # path=/tmp/tmp.gBaq7WOM8r 00:23:50.238 16:19:51 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:50.238 16:19:51 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:50.238 16:19:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:50.238 16:19:51 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:50.238 16:19:51 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:50.238 16:19:51 -- nvmf/common.sh@693 -- # digest=0 00:23:50.238 16:19:51 -- nvmf/common.sh@694 -- # python - 00:23:50.238 16:19:51 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gBaq7WOM8r 00:23:50.238 16:19:51 -- keyring/common.sh@23 -- # echo /tmp/tmp.gBaq7WOM8r 00:23:50.238 16:19:51 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.gBaq7WOM8r 00:23:50.238 16:19:51 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:50.238 16:19:51 -- keyring/common.sh@15 -- # local name key digest path 00:23:50.238 16:19:51 -- keyring/common.sh@17 -- # name=key1 00:23:50.238 16:19:51 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:50.238 16:19:51 -- keyring/common.sh@17 -- # digest=0 00:23:50.238 16:19:51 -- keyring/common.sh@18 -- # mktemp 00:23:50.238 16:19:51 -- keyring/common.sh@18 -- # path=/tmp/tmp.tO4nRTcFK6 00:23:50.238 16:19:51 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:50.238 16:19:51 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:23:50.238 16:19:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:50.238 16:19:51 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:50.238 16:19:51 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:23:50.238 16:19:51 -- nvmf/common.sh@693 -- # digest=0 00:23:50.238 16:19:51 -- nvmf/common.sh@694 -- # python - 00:23:50.238 16:19:51 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tO4nRTcFK6 00:23:50.238 16:19:51 -- keyring/common.sh@23 -- # echo /tmp/tmp.tO4nRTcFK6 00:23:50.238 16:19:51 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.tO4nRTcFK6 00:23:50.238 16:19:51 -- keyring/file.sh@30 -- # tgtpid=3506027 00:23:50.238 16:19:51 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:23:50.238 16:19:51 -- keyring/file.sh@32 -- # waitforlisten 3506027 00:23:50.238 16:19:51 -- common/autotest_common.sh@817 -- # '[' -z 3506027 ']' 00:23:50.238 16:19:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.238 16:19:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:50.238 16:19:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.238 16:19:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:50.238 16:19:51 -- common/autotest_common.sh@10 -- # set +x 00:23:50.238 [2024-04-24 16:19:51.492552] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 00:23:50.238 [2024-04-24 16:19:51.492629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506027 ] 00:23:50.238 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.497 [2024-04-24 16:19:51.556252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.497 [2024-04-24 16:19:51.671281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.436 16:19:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:51.436 16:19:52 -- common/autotest_common.sh@850 -- # return 0 00:23:51.436 16:19:52 -- keyring/file.sh@33 -- # rpc_cmd 00:23:51.436 16:19:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.436 16:19:52 -- common/autotest_common.sh@10 -- # set +x 00:23:51.436 [2024-04-24 16:19:52.427738] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.436 null0 00:23:51.436 [2024-04-24 16:19:52.459812] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.436 [2024-04-24 16:19:52.460258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:51.437 [2024-04-24 16:19:52.467841] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:51.437 16:19:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.437 16:19:52 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:51.437 16:19:52 -- common/autotest_common.sh@638 -- # local es=0 00:23:51.437 16:19:52 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:51.437 16:19:52 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:51.437 16:19:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:51.437 16:19:52 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:51.437 16:19:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:51.437 16:19:52 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:51.437 16:19:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.437 16:19:52 -- common/autotest_common.sh@10 -- # set +x 00:23:51.437 [2024-04-24 16:19:52.475845] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:23:51.437 { 00:23:51.437 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.437 "secure_channel": false, 00:23:51.437 "listen_address": { 00:23:51.437 "trtype": "tcp", 00:23:51.437 "traddr": "127.0.0.1", 00:23:51.437 "trsvcid": "4420" 00:23:51.437 }, 00:23:51.437 "method": "nvmf_subsystem_add_listener", 00:23:51.437 "req_id": 1 00:23:51.437 } 00:23:51.437 Got JSON-RPC error response 00:23:51.437 response: 00:23:51.437 { 00:23:51.437 "code": -32602, 00:23:51.437 "message": "Invalid parameters" 00:23:51.437 } 00:23:51.437 16:19:52 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:51.437 16:19:52 -- common/autotest_common.sh@641 -- # es=1 00:23:51.437 16:19:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:51.437 16:19:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:51.437 16:19:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:51.437 16:19:52 -- keyring/file.sh@46 -- # bperfpid=3506066 00:23:51.437 16:19:52 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:51.437 16:19:52 -- keyring/file.sh@48 -- # waitforlisten 3506066 /var/tmp/bperf.sock 00:23:51.437 16:19:52 -- common/autotest_common.sh@817 -- # '[' -z 3506066 ']' 00:23:51.437 16:19:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:51.437 16:19:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:51.437 16:19:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:51.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:51.437 16:19:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:51.437 16:19:52 -- common/autotest_common.sh@10 -- # set +x 00:23:51.437 [2024-04-24 16:19:52.523379] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
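Earlier in the trace, prep_key piped each test key through an inline `python -` heredoc to produce the /tmp/tmp.* files that keyring_file_add_key consumes. The heredoc body is elided by the xtrace, so the sketch below is an assumption throughout: it assumes the NVMe/TCP TLS PSK interchange layout prefix:hash:base64(key || CRC32(key)):, with the key string wrapped as-is and the hash field taken from the digest argument seen in the trace.

```bash
# Hypothetical re-creation of prep_key's key-file generation (layout assumed, see above).
gen_psk() {
  local key=$1 digest=$2 path=$3
  python3 - "$key" "$digest" > "$path" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")      # CRC32 appended little-endian (assumed)
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
  chmod 0600 "$path"  # 0600 matters: the 0660 attempt later in this log is rejected by keyring_file_check_path
}

gen_psk 00112233445566778899aabbccddeeff 0 /tmp/psk_key0.txt
```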
00:23:51.437 [2024-04-24 16:19:52.523448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506066 ] 00:23:51.437 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.437 [2024-04-24 16:19:52.587282] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.437 [2024-04-24 16:19:52.700427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.695 16:19:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:51.695 16:19:52 -- common/autotest_common.sh@850 -- # return 0 00:23:51.695 16:19:52 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gBaq7WOM8r 00:23:51.695 16:19:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gBaq7WOM8r 00:23:51.953 16:19:53 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tO4nRTcFK6 00:23:51.953 16:19:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tO4nRTcFK6 00:23:52.211 16:19:53 -- keyring/file.sh@51 -- # get_key key0 00:23:52.211 16:19:53 -- keyring/file.sh@51 -- # jq -r .path 00:23:52.211 16:19:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:52.211 16:19:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:52.211 16:19:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:52.470 16:19:53 -- keyring/file.sh@51 -- # [[ /tmp/tmp.gBaq7WOM8r == \/\t\m\p\/\t\m\p\.\g\B\a\q\7\W\O\M\8\r ]] 00:23:52.470 16:19:53 -- keyring/file.sh@52 -- # get_key key1 00:23:52.470 16:19:53 -- keyring/file.sh@52 -- # jq -r .path 00:23:52.470 16:19:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:52.470 16:19:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:52.470 16:19:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:52.727 16:19:53 -- keyring/file.sh@52 -- # [[ /tmp/tmp.tO4nRTcFK6 == \/\t\m\p\/\t\m\p\.\t\O\4\n\R\T\c\F\K\6 ]] 00:23:52.727 16:19:53 -- keyring/file.sh@53 -- # get_refcnt key0 00:23:52.727 16:19:53 -- keyring/common.sh@12 -- # get_key key0 00:23:52.727 16:19:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:52.727 16:19:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:52.727 16:19:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:52.727 16:19:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:52.987 16:19:54 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:52.987 16:19:54 -- keyring/file.sh@54 -- # get_refcnt key1 00:23:52.987 16:19:54 -- keyring/common.sh@12 -- # get_key key1 00:23:52.987 16:19:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:52.987 16:19:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:52.987 16:19:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:52.987 16:19:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:52.987 16:19:54 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:52.987 
16:19:54 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:52.987 16:19:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:53.246 [2024-04-24 16:19:54.484065] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.505 nvme0n1 00:23:53.505 16:19:54 -- keyring/file.sh@59 -- # get_refcnt key0 00:23:53.505 16:19:54 -- keyring/common.sh@12 -- # get_key key0 00:23:53.505 16:19:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:53.505 16:19:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:53.505 16:19:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:53.505 16:19:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:53.763 16:19:54 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:53.763 16:19:54 -- keyring/file.sh@60 -- # get_refcnt key1 00:23:53.763 16:19:54 -- keyring/common.sh@12 -- # get_key key1 00:23:53.763 16:19:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:53.763 16:19:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:53.763 16:19:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:53.763 16:19:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:54.023 16:19:55 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:54.023 16:19:55 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:54.023 Running I/O for 1 seconds... 
00:23:54.957 00:23:54.957 Latency(us) 00:23:54.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.957 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:54.957 nvme0n1 : 1.01 6947.97 27.14 0.00 0.00 18275.32 5995.33 26602.76 00:23:54.957 =================================================================================================================== 00:23:54.957 Total : 6947.97 27.14 0.00 0.00 18275.32 5995.33 26602.76 00:23:54.957 0 00:23:54.957 16:19:56 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:54.957 16:19:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:55.213 16:19:56 -- keyring/file.sh@65 -- # get_refcnt key0 00:23:55.213 16:19:56 -- keyring/common.sh@12 -- # get_key key0 00:23:55.213 16:19:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:55.213 16:19:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:55.213 16:19:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:55.213 16:19:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:55.469 16:19:56 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:55.469 16:19:56 -- keyring/file.sh@66 -- # get_refcnt key1 00:23:55.469 16:19:56 -- keyring/common.sh@12 -- # get_key key1 00:23:55.469 16:19:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:55.469 16:19:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:55.469 16:19:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:55.469 16:19:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:55.727 16:19:56 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:55.727 16:19:56 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:55.727 16:19:56 -- common/autotest_common.sh@638 -- # local es=0 00:23:55.727 16:19:56 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:55.727 16:19:56 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:55.727 16:19:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:55.727 16:19:56 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:55.727 16:19:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:55.727 16:19:56 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:55.727 16:19:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:55.985 [2024-04-24 16:19:57.173928] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:55.985 [2024-04-24 16:19:57.174514] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bf2b0 (107): Transport endpoint is not connected 00:23:55.985 [2024-04-24 16:19:57.175506] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bf2b0 (9): Bad file descriptor 00:23:55.985 [2024-04-24 16:19:57.176505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:55.985 [2024-04-24 16:19:57.176526] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:55.985 [2024-04-24 16:19:57.176539] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:55.985 request: 00:23:55.985 { 00:23:55.985 "name": "nvme0", 00:23:55.985 "trtype": "tcp", 00:23:55.985 "traddr": "127.0.0.1", 00:23:55.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:55.985 "adrfam": "ipv4", 00:23:55.985 "trsvcid": "4420", 00:23:55.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:55.985 "psk": "key1", 00:23:55.985 "method": "bdev_nvme_attach_controller", 00:23:55.985 "req_id": 1 00:23:55.985 } 00:23:55.985 Got JSON-RPC error response 00:23:55.985 response: 00:23:55.985 { 00:23:55.985 "code": -32602, 00:23:55.985 "message": "Invalid parameters" 00:23:55.985 } 00:23:55.985 16:19:57 -- common/autotest_common.sh@641 -- # es=1 00:23:55.985 16:19:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:55.985 16:19:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:55.985 16:19:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:55.985 16:19:57 -- keyring/file.sh@71 -- # get_refcnt key0 00:23:55.985 16:19:57 -- keyring/common.sh@12 -- # get_key key0 00:23:55.985 16:19:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:55.985 16:19:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:55.985 16:19:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:55.985 16:19:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:56.242 16:19:57 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:56.242 16:19:57 -- keyring/file.sh@72 -- # get_refcnt key1 00:23:56.242 16:19:57 -- keyring/common.sh@12 -- # get_key key1 00:23:56.242 16:19:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:56.242 16:19:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:56.242 16:19:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:56.242 16:19:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:56.498 16:19:57 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:56.498 16:19:57 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:56.498 16:19:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:56.756 16:19:57 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:56.756 16:19:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:57.014 16:19:58 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:57.014 16:19:58 -- keyring/file.sh@77 -- # jq length 00:23:57.014 16:19:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:57.271 16:19:58 
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:57.271 16:19:58 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.gBaq7WOM8r 00:23:57.271 16:19:58 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.gBaq7WOM8r 00:23:57.271 16:19:58 -- common/autotest_common.sh@638 -- # local es=0 00:23:57.271 16:19:58 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.gBaq7WOM8r 00:23:57.271 16:19:58 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:57.271 16:19:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:57.271 16:19:58 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:57.271 16:19:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:57.271 16:19:58 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gBaq7WOM8r 00:23:57.271 16:19:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gBaq7WOM8r 00:23:57.528 [2024-04-24 16:19:58.638522] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gBaq7WOM8r': 0100660 00:23:57.528 [2024-04-24 16:19:58.638560] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:57.528 request: 00:23:57.528 { 00:23:57.528 "name": "key0", 00:23:57.528 "path": "/tmp/tmp.gBaq7WOM8r", 00:23:57.528 "method": "keyring_file_add_key", 00:23:57.528 "req_id": 1 00:23:57.528 } 00:23:57.528 Got JSON-RPC error response 00:23:57.528 response: 00:23:57.528 { 00:23:57.528 "code": -1, 00:23:57.528 "message": "Operation not permitted" 00:23:57.528 } 00:23:57.528 16:19:58 -- common/autotest_common.sh@641 -- # es=1 00:23:57.528 16:19:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:57.528 16:19:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:57.528 16:19:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:57.528 16:19:58 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.gBaq7WOM8r 00:23:57.528 16:19:58 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gBaq7WOM8r 00:23:57.528 16:19:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gBaq7WOM8r 00:23:57.786 16:19:58 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.gBaq7WOM8r 00:23:57.786 16:19:58 -- keyring/file.sh@88 -- # get_refcnt key0 00:23:57.786 16:19:58 -- keyring/common.sh@12 -- # get_key key0 00:23:57.786 16:19:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:57.786 16:19:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:57.786 16:19:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:57.786 16:19:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:58.044 16:19:59 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:58.044 16:19:59 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:58.044 16:19:59 -- common/autotest_common.sh@638 -- # local es=0 00:23:58.044 16:19:59 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:58.044 16:19:59 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:58.044 16:19:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:58.044 16:19:59 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:58.044 16:19:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:58.044 16:19:59 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:58.044 16:19:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:58.302 [2024-04-24 16:19:59.372623] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.gBaq7WOM8r': No such file or directory 00:23:58.302 [2024-04-24 16:19:59.372661] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:58.302 [2024-04-24 16:19:59.372702] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:58.302 [2024-04-24 16:19:59.372715] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:58.302 [2024-04-24 16:19:59.372728] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:58.302 request: 00:23:58.302 { 00:23:58.302 "name": "nvme0", 00:23:58.302 "trtype": "tcp", 00:23:58.302 "traddr": "127.0.0.1", 00:23:58.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:58.302 "adrfam": "ipv4", 00:23:58.302 "trsvcid": "4420", 00:23:58.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.302 "psk": "key0", 00:23:58.302 "method": "bdev_nvme_attach_controller", 00:23:58.302 "req_id": 1 00:23:58.302 } 00:23:58.302 Got JSON-RPC error response 00:23:58.302 response: 00:23:58.302 { 00:23:58.302 "code": -19, 00:23:58.302 "message": "No such device" 00:23:58.302 } 00:23:58.302 16:19:59 -- common/autotest_common.sh@641 -- # es=1 00:23:58.302 16:19:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:58.302 16:19:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:58.302 16:19:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:58.302 16:19:59 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:58.302 16:19:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:58.559 16:19:59 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:58.559 16:19:59 -- keyring/common.sh@15 -- # local name key digest path 00:23:58.559 16:19:59 -- keyring/common.sh@17 -- # name=key0 00:23:58.559 16:19:59 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:58.559 16:19:59 -- keyring/common.sh@17 -- # digest=0 00:23:58.559 16:19:59 -- keyring/common.sh@18 -- # mktemp 00:23:58.559 16:19:59 -- keyring/common.sh@18 -- # path=/tmp/tmp.LH17XwPoBi 00:23:58.559 16:19:59 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:58.559 16:19:59 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:58.559 16:19:59 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:58.559 16:19:59 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:58.559 16:19:59 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:58.559 16:19:59 -- nvmf/common.sh@693 -- # digest=0 00:23:58.559 16:19:59 -- nvmf/common.sh@694 -- # python - 00:23:58.559 16:19:59 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LH17XwPoBi 00:23:58.559 16:19:59 -- keyring/common.sh@23 -- # echo /tmp/tmp.LH17XwPoBi 00:23:58.559 16:19:59 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.LH17XwPoBi 00:23:58.559 16:19:59 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LH17XwPoBi 00:23:58.559 16:19:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LH17XwPoBi 00:23:58.816 16:19:59 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:58.816 16:19:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:59.074 nvme0n1 00:23:59.074 16:20:00 -- keyring/file.sh@99 -- # get_refcnt key0 00:23:59.074 16:20:00 -- keyring/common.sh@12 -- # get_key key0 00:23:59.074 16:20:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:59.074 16:20:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:59.074 16:20:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:59.074 16:20:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:59.332 16:20:00 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:59.332 16:20:00 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:59.332 16:20:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:59.590 16:20:00 -- keyring/file.sh@101 -- # get_key key0 00:23:59.590 16:20:00 -- keyring/file.sh@101 -- # jq -r .removed 00:23:59.590 16:20:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:59.590 16:20:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:59.590 16:20:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:59.846 16:20:00 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:59.846 16:20:00 -- keyring/file.sh@102 -- # get_refcnt key0 00:23:59.846 16:20:00 -- keyring/common.sh@12 -- # get_key key0 00:23:59.846 16:20:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:59.846 16:20:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:59.846 16:20:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:59.846 16:20:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:00.101 16:20:01 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:00.101 16:20:01 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:00.101 16:20:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:00.439 16:20:01 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:00.439 16:20:01 -- keyring/file.sh@104 -- # jq length 00:24:00.439 
16:20:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:00.439 16:20:01 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:00.439 16:20:01 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LH17XwPoBi 00:24:00.439 16:20:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LH17XwPoBi 00:24:00.696 16:20:01 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tO4nRTcFK6 00:24:00.696 16:20:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tO4nRTcFK6 00:24:00.954 16:20:02 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:00.954 16:20:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:01.213 nvme0n1 00:24:01.473 16:20:02 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:01.473 16:20:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:01.733 16:20:02 -- keyring/file.sh@112 -- # config='{ 00:24:01.733 "subsystems": [ 00:24:01.733 { 00:24:01.733 "subsystem": "keyring", 00:24:01.733 "config": [ 00:24:01.733 { 00:24:01.733 "method": "keyring_file_add_key", 00:24:01.733 "params": { 00:24:01.733 "name": "key0", 00:24:01.733 "path": "/tmp/tmp.LH17XwPoBi" 00:24:01.733 } 00:24:01.733 }, 00:24:01.733 { 00:24:01.733 "method": "keyring_file_add_key", 00:24:01.733 "params": { 00:24:01.733 "name": "key1", 00:24:01.733 "path": "/tmp/tmp.tO4nRTcFK6" 00:24:01.733 } 00:24:01.733 } 00:24:01.733 ] 00:24:01.733 }, 00:24:01.733 { 00:24:01.733 "subsystem": "iobuf", 00:24:01.733 "config": [ 00:24:01.733 { 00:24:01.733 "method": "iobuf_set_options", 00:24:01.733 "params": { 00:24:01.733 "small_pool_count": 8192, 00:24:01.733 "large_pool_count": 1024, 00:24:01.733 "small_bufsize": 8192, 00:24:01.733 "large_bufsize": 135168 00:24:01.733 } 00:24:01.733 } 00:24:01.733 ] 00:24:01.733 }, 00:24:01.733 { 00:24:01.733 "subsystem": "sock", 00:24:01.733 "config": [ 00:24:01.733 { 00:24:01.733 "method": "sock_impl_set_options", 00:24:01.733 "params": { 00:24:01.733 "impl_name": "posix", 00:24:01.733 "recv_buf_size": 2097152, 00:24:01.733 "send_buf_size": 2097152, 00:24:01.733 "enable_recv_pipe": true, 00:24:01.733 "enable_quickack": false, 00:24:01.733 "enable_placement_id": 0, 00:24:01.733 "enable_zerocopy_send_server": true, 00:24:01.733 "enable_zerocopy_send_client": false, 00:24:01.733 "zerocopy_threshold": 0, 00:24:01.733 "tls_version": 0, 00:24:01.733 "enable_ktls": false 00:24:01.733 } 00:24:01.733 }, 00:24:01.733 { 00:24:01.733 "method": "sock_impl_set_options", 00:24:01.733 "params": { 00:24:01.733 "impl_name": "ssl", 00:24:01.733 "recv_buf_size": 4096, 00:24:01.733 "send_buf_size": 4096, 00:24:01.733 "enable_recv_pipe": true, 00:24:01.733 "enable_quickack": false, 00:24:01.733 "enable_placement_id": 0, 00:24:01.733 "enable_zerocopy_send_server": true, 00:24:01.733 "enable_zerocopy_send_client": false, 00:24:01.733 "zerocopy_threshold": 0, 00:24:01.733 
"tls_version": 0, 00:24:01.733 "enable_ktls": false 00:24:01.733 } 00:24:01.733 } 00:24:01.733 ] 00:24:01.733 }, 00:24:01.733 { 00:24:01.733 "subsystem": "vmd", 00:24:01.733 "config": [] 00:24:01.733 }, 00:24:01.733 { 00:24:01.733 "subsystem": "accel", 00:24:01.733 "config": [ 00:24:01.733 { 00:24:01.733 "method": "accel_set_options", 00:24:01.733 "params": { 00:24:01.733 "small_cache_size": 128, 00:24:01.733 "large_cache_size": 16, 00:24:01.733 "task_count": 2048, 00:24:01.733 "sequence_count": 2048, 00:24:01.733 "buf_count": 2048 00:24:01.733 } 00:24:01.733 } 00:24:01.733 ] 00:24:01.733 }, 00:24:01.733 { 00:24:01.733 "subsystem": "bdev", 00:24:01.733 "config": [ 00:24:01.733 { 00:24:01.733 "method": "bdev_set_options", 00:24:01.733 "params": { 00:24:01.733 "bdev_io_pool_size": 65535, 00:24:01.733 "bdev_io_cache_size": 256, 00:24:01.733 "bdev_auto_examine": true, 00:24:01.733 "iobuf_small_cache_size": 128, 00:24:01.733 "iobuf_large_cache_size": 16 00:24:01.733 } 00:24:01.733 }, 00:24:01.733 { 00:24:01.733 "method": "bdev_raid_set_options", 00:24:01.733 "params": { 00:24:01.733 "process_window_size_kb": 1024 00:24:01.733 } 00:24:01.733 }, 00:24:01.733 { 00:24:01.733 "method": "bdev_iscsi_set_options", 00:24:01.734 "params": { 00:24:01.734 "timeout_sec": 30 00:24:01.734 } 00:24:01.734 }, 00:24:01.734 { 00:24:01.734 "method": "bdev_nvme_set_options", 00:24:01.734 "params": { 00:24:01.734 "action_on_timeout": "none", 00:24:01.734 "timeout_us": 0, 00:24:01.734 "timeout_admin_us": 0, 00:24:01.734 "keep_alive_timeout_ms": 10000, 00:24:01.734 "arbitration_burst": 0, 00:24:01.734 "low_priority_weight": 0, 00:24:01.734 "medium_priority_weight": 0, 00:24:01.734 "high_priority_weight": 0, 00:24:01.734 "nvme_adminq_poll_period_us": 10000, 00:24:01.734 "nvme_ioq_poll_period_us": 0, 00:24:01.734 "io_queue_requests": 512, 00:24:01.734 "delay_cmd_submit": true, 00:24:01.734 "transport_retry_count": 4, 00:24:01.734 "bdev_retry_count": 3, 00:24:01.734 "transport_ack_timeout": 0, 00:24:01.734 "ctrlr_loss_timeout_sec": 0, 00:24:01.734 "reconnect_delay_sec": 0, 00:24:01.734 "fast_io_fail_timeout_sec": 0, 00:24:01.734 "disable_auto_failback": false, 00:24:01.734 "generate_uuids": false, 00:24:01.734 "transport_tos": 0, 00:24:01.734 "nvme_error_stat": false, 00:24:01.734 "rdma_srq_size": 0, 00:24:01.734 "io_path_stat": false, 00:24:01.734 "allow_accel_sequence": false, 00:24:01.734 "rdma_max_cq_size": 0, 00:24:01.734 "rdma_cm_event_timeout_ms": 0, 00:24:01.734 "dhchap_digests": [ 00:24:01.734 "sha256", 00:24:01.734 "sha384", 00:24:01.734 "sha512" 00:24:01.734 ], 00:24:01.734 "dhchap_dhgroups": [ 00:24:01.734 "null", 00:24:01.734 "ffdhe2048", 00:24:01.734 "ffdhe3072", 00:24:01.734 "ffdhe4096", 00:24:01.734 "ffdhe6144", 00:24:01.734 "ffdhe8192" 00:24:01.734 ] 00:24:01.734 } 00:24:01.734 }, 00:24:01.734 { 00:24:01.734 "method": "bdev_nvme_attach_controller", 00:24:01.734 "params": { 00:24:01.734 "name": "nvme0", 00:24:01.734 "trtype": "TCP", 00:24:01.734 "adrfam": "IPv4", 00:24:01.734 "traddr": "127.0.0.1", 00:24:01.734 "trsvcid": "4420", 00:24:01.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.734 "prchk_reftag": false, 00:24:01.734 "prchk_guard": false, 00:24:01.734 "ctrlr_loss_timeout_sec": 0, 00:24:01.734 "reconnect_delay_sec": 0, 00:24:01.734 "fast_io_fail_timeout_sec": 0, 00:24:01.734 "psk": "key0", 00:24:01.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:01.734 "hdgst": false, 00:24:01.734 "ddgst": false 00:24:01.734 } 00:24:01.734 }, 00:24:01.734 { 00:24:01.734 "method": "bdev_nvme_set_hotplug", 
00:24:01.734 "params": { 00:24:01.734 "period_us": 100000, 00:24:01.734 "enable": false 00:24:01.734 } 00:24:01.734 }, 00:24:01.734 { 00:24:01.734 "method": "bdev_wait_for_examine" 00:24:01.734 } 00:24:01.734 ] 00:24:01.734 }, 00:24:01.734 { 00:24:01.734 "subsystem": "nbd", 00:24:01.734 "config": [] 00:24:01.734 } 00:24:01.734 ] 00:24:01.734 }' 00:24:01.734 16:20:02 -- keyring/file.sh@114 -- # killprocess 3506066 00:24:01.734 16:20:02 -- common/autotest_common.sh@936 -- # '[' -z 3506066 ']' 00:24:01.734 16:20:02 -- common/autotest_common.sh@940 -- # kill -0 3506066 00:24:01.734 16:20:02 -- common/autotest_common.sh@941 -- # uname 00:24:01.734 16:20:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:01.734 16:20:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3506066 00:24:01.734 16:20:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:01.734 16:20:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:01.734 16:20:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3506066' 00:24:01.734 killing process with pid 3506066 00:24:01.734 16:20:02 -- common/autotest_common.sh@955 -- # kill 3506066 00:24:01.734 Received shutdown signal, test time was about 1.000000 seconds 00:24:01.734 00:24:01.734 Latency(us) 00:24:01.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.734 =================================================================================================================== 00:24:01.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.734 16:20:02 -- common/autotest_common.sh@960 -- # wait 3506066 00:24:01.992 16:20:03 -- keyring/file.sh@117 -- # bperfpid=3507511 00:24:01.992 16:20:03 -- keyring/file.sh@119 -- # waitforlisten 3507511 /var/tmp/bperf.sock 00:24:01.992 16:20:03 -- common/autotest_common.sh@817 -- # '[' -z 3507511 ']' 00:24:01.992 16:20:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:01.992 16:20:03 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:01.992 16:20:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:01.992 16:20:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:01.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:01.992 16:20:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:01.992 16:20:03 -- common/autotest_common.sh@10 -- # set +x 00:24:01.992 16:20:03 -- keyring/file.sh@115 -- # echo '{ 00:24:01.992 "subsystems": [ 00:24:01.992 { 00:24:01.992 "subsystem": "keyring", 00:24:01.992 "config": [ 00:24:01.992 { 00:24:01.992 "method": "keyring_file_add_key", 00:24:01.992 "params": { 00:24:01.992 "name": "key0", 00:24:01.992 "path": "/tmp/tmp.LH17XwPoBi" 00:24:01.992 } 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "method": "keyring_file_add_key", 00:24:01.992 "params": { 00:24:01.992 "name": "key1", 00:24:01.992 "path": "/tmp/tmp.tO4nRTcFK6" 00:24:01.992 } 00:24:01.992 } 00:24:01.992 ] 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "subsystem": "iobuf", 00:24:01.992 "config": [ 00:24:01.992 { 00:24:01.992 "method": "iobuf_set_options", 00:24:01.992 "params": { 00:24:01.992 "small_pool_count": 8192, 00:24:01.992 "large_pool_count": 1024, 00:24:01.992 "small_bufsize": 8192, 00:24:01.992 "large_bufsize": 135168 00:24:01.992 } 00:24:01.992 } 00:24:01.992 ] 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "subsystem": "sock", 00:24:01.992 "config": [ 00:24:01.992 { 00:24:01.992 "method": "sock_impl_set_options", 00:24:01.992 "params": { 00:24:01.992 "impl_name": "posix", 00:24:01.992 "recv_buf_size": 2097152, 00:24:01.992 "send_buf_size": 2097152, 00:24:01.992 "enable_recv_pipe": true, 00:24:01.992 "enable_quickack": false, 00:24:01.992 "enable_placement_id": 0, 00:24:01.992 "enable_zerocopy_send_server": true, 00:24:01.992 "enable_zerocopy_send_client": false, 00:24:01.992 "zerocopy_threshold": 0, 00:24:01.992 "tls_version": 0, 00:24:01.992 "enable_ktls": false 00:24:01.992 } 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "method": "sock_impl_set_options", 00:24:01.992 "params": { 00:24:01.992 "impl_name": "ssl", 00:24:01.992 "recv_buf_size": 4096, 00:24:01.992 "send_buf_size": 4096, 00:24:01.992 "enable_recv_pipe": true, 00:24:01.992 "enable_quickack": false, 00:24:01.992 "enable_placement_id": 0, 00:24:01.992 "enable_zerocopy_send_server": true, 00:24:01.992 "enable_zerocopy_send_client": false, 00:24:01.992 "zerocopy_threshold": 0, 00:24:01.992 "tls_version": 0, 00:24:01.992 "enable_ktls": false 00:24:01.992 } 00:24:01.992 } 00:24:01.992 ] 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "subsystem": "vmd", 00:24:01.992 "config": [] 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "subsystem": "accel", 00:24:01.992 "config": [ 00:24:01.992 { 00:24:01.992 "method": "accel_set_options", 00:24:01.992 "params": { 00:24:01.992 "small_cache_size": 128, 00:24:01.992 "large_cache_size": 16, 00:24:01.992 "task_count": 2048, 00:24:01.992 "sequence_count": 2048, 00:24:01.992 "buf_count": 2048 00:24:01.992 } 00:24:01.992 } 00:24:01.992 ] 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "subsystem": "bdev", 00:24:01.992 "config": [ 00:24:01.992 { 00:24:01.992 "method": "bdev_set_options", 00:24:01.992 "params": { 00:24:01.992 "bdev_io_pool_size": 65535, 00:24:01.992 "bdev_io_cache_size": 256, 00:24:01.992 "bdev_auto_examine": true, 00:24:01.992 "iobuf_small_cache_size": 128, 00:24:01.992 "iobuf_large_cache_size": 16 00:24:01.992 } 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "method": "bdev_raid_set_options", 00:24:01.992 "params": { 00:24:01.992 "process_window_size_kb": 1024 00:24:01.992 } 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "method": "bdev_iscsi_set_options", 00:24:01.992 "params": { 00:24:01.992 "timeout_sec": 30 00:24:01.992 } 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "method": "bdev_nvme_set_options", 
00:24:01.992 "params": { 00:24:01.992 "action_on_timeout": "none", 00:24:01.992 "timeout_us": 0, 00:24:01.992 "timeout_admin_us": 0, 00:24:01.992 "keep_alive_timeout_ms": 10000, 00:24:01.992 "arbitration_burst": 0, 00:24:01.992 "low_priority_weight": 0, 00:24:01.992 "medium_priority_weight": 0, 00:24:01.992 "high_priority_weight": 0, 00:24:01.992 "nvme_adminq_poll_period_us": 10000, 00:24:01.992 "nvme_ioq_poll_period_us": 0, 00:24:01.992 "io_queue_requests": 512, 00:24:01.992 "delay_cmd_submit": true, 00:24:01.992 "transport_retry_count": 4, 00:24:01.992 "bdev_retry_count": 3, 00:24:01.992 "transport_ack_timeout": 0, 00:24:01.992 "ctrlr_loss_timeout_sec": 0, 00:24:01.992 "reconnect_delay_sec": 0, 00:24:01.992 "fast_io_fail_timeout_sec": 0, 00:24:01.992 "disable_auto_failback": false, 00:24:01.992 "generate_uuids": false, 00:24:01.992 "transport_tos": 0, 00:24:01.992 "nvme_error_stat": false, 00:24:01.992 "rdma_srq_size": 0, 00:24:01.992 "io_path_stat": false, 00:24:01.992 "allow_accel_sequence": false, 00:24:01.992 "rdma_max_cq_size": 0, 00:24:01.992 "rdma_cm_event_timeout_ms": 0, 00:24:01.992 "dhchap_digests": [ 00:24:01.992 "sha256", 00:24:01.992 "sha384", 00:24:01.992 "sha512" 00:24:01.992 ], 00:24:01.992 "dhchap_dhgroups": [ 00:24:01.992 "null", 00:24:01.992 "ffdhe2048", 00:24:01.992 "ffdhe3072", 00:24:01.992 "ffdhe4096", 00:24:01.992 "ffdhe6144", 00:24:01.992 "ffdhe8192" 00:24:01.992 ] 00:24:01.992 } 00:24:01.992 }, 00:24:01.992 { 00:24:01.992 "method": "bdev_nvme_attach_controller", 00:24:01.992 "params": { 00:24:01.992 "name": "nvme0", 00:24:01.992 "trtype": "TCP", 00:24:01.992 "adrfam": "IPv4", 00:24:01.992 "traddr": "127.0.0.1", 00:24:01.992 "trsvcid": "4420", 00:24:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.992 "prchk_reftag": false, 00:24:01.992 "prchk_guard": false, 00:24:01.992 "ctrlr_loss_timeout_sec": 0, 00:24:01.992 "reconnect_delay_sec": 0, 00:24:01.992 "fast_io_fail_timeout_sec": 0, 00:24:01.992 "psk": "key0", 00:24:01.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:01.993 "hdgst": false, 00:24:01.993 "ddgst": false 00:24:01.993 } 00:24:01.993 }, 00:24:01.993 { 00:24:01.993 "method": "bdev_nvme_set_hotplug", 00:24:01.993 "params": { 00:24:01.993 "period_us": 100000, 00:24:01.993 "enable": false 00:24:01.993 } 00:24:01.993 }, 00:24:01.993 { 00:24:01.993 "method": "bdev_wait_for_examine" 00:24:01.993 } 00:24:01.993 ] 00:24:01.993 }, 00:24:01.993 { 00:24:01.993 "subsystem": "nbd", 00:24:01.993 "config": [] 00:24:01.993 } 00:24:01.993 ] 00:24:01.993 }' 00:24:01.993 [2024-04-24 16:20:03.165413] Starting SPDK v24.05-pre git sha1 77aac3af8 / DPDK 23.11.0 initialization... 
00:24:01.993 [2024-04-24 16:20:03.165501] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507511 ] 00:24:01.993 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.993 [2024-04-24 16:20:03.228596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.252 [2024-04-24 16:20:03.337259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.252 [2024-04-24 16:20:03.523085] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.187 16:20:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:03.187 16:20:04 -- common/autotest_common.sh@850 -- # return 0 00:24:03.187 16:20:04 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:03.187 16:20:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:03.187 16:20:04 -- keyring/file.sh@120 -- # jq length 00:24:03.187 16:20:04 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:03.187 16:20:04 -- keyring/file.sh@121 -- # get_refcnt key0 00:24:03.187 16:20:04 -- keyring/common.sh@12 -- # get_key key0 00:24:03.187 16:20:04 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:03.187 16:20:04 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:03.187 16:20:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:03.187 16:20:04 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:03.446 16:20:04 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:03.446 16:20:04 -- keyring/file.sh@122 -- # get_refcnt key1 00:24:03.446 16:20:04 -- keyring/common.sh@12 -- # get_key key1 00:24:03.446 16:20:04 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:03.446 16:20:04 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:03.446 16:20:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:03.446 16:20:04 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:03.705 16:20:04 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:03.705 16:20:04 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:03.705 16:20:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:03.705 16:20:04 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:03.964 16:20:05 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:03.964 16:20:05 -- keyring/file.sh@1 -- # cleanup 00:24:03.964 16:20:05 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LH17XwPoBi /tmp/tmp.tO4nRTcFK6 00:24:03.964 16:20:05 -- keyring/file.sh@20 -- # killprocess 3507511 00:24:03.964 16:20:05 -- common/autotest_common.sh@936 -- # '[' -z 3507511 ']' 00:24:03.964 16:20:05 -- common/autotest_common.sh@940 -- # kill -0 3507511 00:24:03.964 16:20:05 -- common/autotest_common.sh@941 -- # uname 00:24:03.964 16:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:03.964 16:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3507511 00:24:03.964 16:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:03.964 16:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:03.964 16:20:05 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3507511' 00:24:03.964 killing process with pid 3507511 00:24:03.964 16:20:05 -- common/autotest_common.sh@955 -- # kill 3507511 00:24:03.964 Received shutdown signal, test time was about 1.000000 seconds 00:24:03.964 00:24:03.964 Latency(us) 00:24:03.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.964 =================================================================================================================== 00:24:03.964 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:03.964 16:20:05 -- common/autotest_common.sh@960 -- # wait 3507511 00:24:04.224 16:20:05 -- keyring/file.sh@21 -- # killprocess 3506027 00:24:04.224 16:20:05 -- common/autotest_common.sh@936 -- # '[' -z 3506027 ']' 00:24:04.224 16:20:05 -- common/autotest_common.sh@940 -- # kill -0 3506027 00:24:04.224 16:20:05 -- common/autotest_common.sh@941 -- # uname 00:24:04.224 16:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:04.224 16:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3506027 00:24:04.224 16:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:04.224 16:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:04.224 16:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3506027' 00:24:04.224 killing process with pid 3506027 00:24:04.224 16:20:05 -- common/autotest_common.sh@955 -- # kill 3506027 00:24:04.224 [2024-04-24 16:20:05.422640] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:04.224 16:20:05 -- common/autotest_common.sh@960 -- # wait 3506027 00:24:04.794 00:24:04.794 real 0m14.593s 00:24:04.794 user 0m35.398s 00:24:04.794 sys 0m3.365s 00:24:04.794 16:20:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:04.794 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:24:04.794 ************************************ 00:24:04.794 END TEST keyring_file 00:24:04.794 ************************************ 00:24:04.794 16:20:05 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:24:04.794 16:20:05 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:24:04.794 16:20:05 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:24:04.794 16:20:05 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:24:04.794 16:20:05 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:24:04.794 16:20:05 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:24:04.794 16:20:05 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:24:04.794 16:20:05 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:24:04.794 16:20:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:04.794 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:24:04.794 16:20:05 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:24:04.794 16:20:05 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:24:04.794 16:20:05 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:24:04.794 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:24:06.697 INFO: APP EXITING 00:24:06.697 INFO: killing all VMs 00:24:06.697 INFO: killing vhost app 00:24:06.697 INFO: EXIT DONE 00:24:07.631 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:24:07.631 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:24:07.631 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:24:07.631 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:24:07.631 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:24:07.631 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:24:07.631 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:24:07.631 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:24:07.631 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:24:07.631 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:24:07.631 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:24:07.631 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:24:07.631 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:24:07.631 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:24:07.631 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:24:07.631 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:24:07.631 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:24:09.007 Cleaning 00:24:09.007 Removing: /var/run/dpdk/spdk0/config 00:24:09.007 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:09.007 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:09.007 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:09.007 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:09.007 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:24:09.007 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:24:09.007 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:24:09.007 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:24:09.007 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:09.007 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:09.007 Removing: /var/run/dpdk/spdk1/config 00:24:09.007 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:09.007 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:09.007 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:09.007 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:09.007 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:24:09.007 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:24:09.007 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:24:09.007 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:24:09.007 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:09.007 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:09.007 Removing: /var/run/dpdk/spdk1/mp_socket 00:24:09.007 Removing: /var/run/dpdk/spdk2/config 00:24:09.007 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:09.007 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:09.007 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:09.007 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:09.007 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:24:09.007 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:24:09.007 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:24:09.007 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:24:09.007 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:09.007 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:09.007 Removing: /var/run/dpdk/spdk3/config 00:24:09.007 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:09.007 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:09.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:09.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:09.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:24:09.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:24:09.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:24:09.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:24:09.008 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:09.008 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:09.008 Removing: /var/run/dpdk/spdk4/config 00:24:09.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:09.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:09.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:09.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:09.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:24:09.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:24:09.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:24:09.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:24:09.008 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:09.008 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:09.008 Removing: /dev/shm/bdev_svc_trace.1 00:24:09.008 Removing: /dev/shm/nvmf_trace.0 00:24:09.008 Removing: /dev/shm/spdk_tgt_trace.pid3281238 00:24:09.008 Removing: /var/run/dpdk/spdk0 00:24:09.008 Removing: /var/run/dpdk/spdk1 00:24:09.008 Removing: /var/run/dpdk/spdk2 00:24:09.008 Removing: /var/run/dpdk/spdk3 00:24:09.008 Removing: /var/run/dpdk/spdk4 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3278900 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3279768 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3281238 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3281730 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3282423 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3282563 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3283416 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3283432 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3283690 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3284890 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3285801 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3286109 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3286309 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3286529 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3286856 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3287023 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3287194 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3287498 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3287842 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3290201 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3290488 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3290668 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3290676 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3291108 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3291239 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3291561 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3291696 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3291989 00:24:09.008 Removing: 
/var/run/dpdk/spdk_pid3292003 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3292172 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3292311 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3292811 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3292970 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3293186 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3293368 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3293516 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3293721 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3293885 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3294061 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3294333 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3294503 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3294685 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3294950 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3295118 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3295400 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3295565 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3295784 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3296011 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3296181 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3296461 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3296627 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3296869 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3297075 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3297242 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3297530 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3297695 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3297977 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3298057 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3298394 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3300487 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3326977 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3329597 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3335486 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3338785 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3341151 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3341559 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3349063 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3349065 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3350183 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3350773 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3351431 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3351825 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3351840 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3351977 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3352109 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3352115 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3352778 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3353426 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3353972 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3354370 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3354493 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3354631 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3355660 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3356388 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3361895 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3362057 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3364706 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3368510 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3370487 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3376887 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3382722 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3384026 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3384695 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3394798 00:24:09.008 Removing: /var/run/dpdk/spdk_pid3397030 00:24:09.267 Removing: 
/var/run/dpdk/spdk_pid3399946 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3401123 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3402330 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3402468 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3402600 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3402734 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3403056 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3404376 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3405107 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3405421 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3407037 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3407589 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3408046 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3410550 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3417073 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3419712 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3423369 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3424448 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3425685 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3428249 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3430627 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3434854 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3434866 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3437757 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3437887 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3438030 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3438301 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3438310 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3440938 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3441396 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3443940 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3445804 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3449220 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3452579 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3457369 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3457371 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3469169 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3469707 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3470113 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3470650 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3471247 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3471668 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3472180 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3472594 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3475090 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3475238 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3479051 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3479220 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3480832 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3485989 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3486006 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3489318 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3490824 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3492239 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3492988 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3494396 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3495277 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3500688 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3500968 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3501355 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3502878 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3503191 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3503588 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3506027 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3506066 00:24:09.267 Removing: /var/run/dpdk/spdk_pid3507511 00:24:09.267 Clean 00:24:09.526 16:20:10 -- common/autotest_common.sh@1437 -- # 
return 0 00:24:09.526 16:20:10 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:24:09.526 16:20:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:09.526 16:20:10 -- common/autotest_common.sh@10 -- # set +x 00:24:09.526 16:20:10 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:24:09.526 16:20:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:09.526 16:20:10 -- common/autotest_common.sh@10 -- # set +x 00:24:09.526 16:20:10 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:24:09.526 16:20:10 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:24:09.526 16:20:10 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:24:09.526 16:20:10 -- spdk/autotest.sh@389 -- # hash lcov 00:24:09.526 16:20:10 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:24:09.526 16:20:10 -- spdk/autotest.sh@391 -- # hostname 00:24:09.526 16:20:10 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:24:09.526 geninfo: WARNING: invalid characters removed from testname! 00:24:48.271 16:20:43 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:48.271 16:20:47 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:49.210 16:20:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:51.814 16:20:53 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:55.112 16:20:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:57.653 16:20:58 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:00.945 16:21:01 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:00.945 16:21:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.945 16:21:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:00.945 16:21:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.945 16:21:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.945 16:21:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.945 16:21:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.945 16:21:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.945 16:21:01 -- paths/export.sh@5 -- $ export PATH 00:25:00.945 16:21:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.945 16:21:01 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:25:00.945 16:21:01 -- common/autobuild_common.sh@435 -- $ date +%s 00:25:00.945 16:21:01 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713968461.XXXXXX 00:25:00.945 16:21:01 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713968461.p1naIH 00:25:00.945 16:21:01 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:25:00.945 16:21:01 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:25:00.945 16:21:01 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:25:00.945 16:21:01 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:25:00.945 16:21:01 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:25:00.945 16:21:01 -- common/autobuild_common.sh@451 -- $ get_config_params 00:25:00.945 16:21:01 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:25:00.945 16:21:01 -- common/autotest_common.sh@10 -- $ set +x 00:25:00.945 16:21:01 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:25:00.945 16:21:01 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:25:00.945 16:21:01 -- pm/common@17 -- $ local monitor 00:25:00.945 16:21:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:00.945 16:21:01 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3516098 00:25:00.945 16:21:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:00.945 16:21:01 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3516100 00:25:00.945 16:21:01 -- pm/common@21 -- $ date +%s 00:25:00.945 16:21:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:00.945 16:21:01 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3516102 00:25:00.945 16:21:01 -- pm/common@21 -- $ date +%s 00:25:00.945 16:21:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:00.945 16:21:01 -- pm/common@21 -- $ date +%s 00:25:00.945 16:21:01 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3516106 00:25:00.945 16:21:01 -- pm/common@26 -- $ sleep 1 00:25:00.945 16:21:01 -- pm/common@21 -- $ date +%s 00:25:00.945 16:21:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713968461 00:25:00.945 16:21:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713968461 00:25:00.945 16:21:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713968461 00:25:00.945 16:21:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713968461 00:25:00.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713968461_collect-vmstat.pm.log 00:25:00.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713968461_collect-bmc-pm.bmc.pm.log 00:25:00.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713968461_collect-cpu-load.pm.log 00:25:00.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713968461_collect-cpu-temp.pm.log 00:25:01.514 
16:21:02 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:25:01.514 16:21:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:25:01.514 16:21:02 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:01.514 16:21:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:01.514 16:21:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:01.514 16:21:02 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:01.514 16:21:02 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:01.514 16:21:02 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:01.514 16:21:02 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:25:01.514 16:21:02 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:01.514 16:21:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:01.514 16:21:02 -- pm/common@30 -- $ signal_monitor_resources TERM 00:25:01.514 16:21:02 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:25:01.514 16:21:02 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:01.514 16:21:02 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:25:01.514 16:21:02 -- pm/common@45 -- $ pid=3516114 00:25:01.514 16:21:02 -- pm/common@52 -- $ sudo kill -TERM 3516114 00:25:01.514 16:21:02 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:01.514 16:21:02 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:25:01.514 16:21:02 -- pm/common@45 -- $ pid=3516112 00:25:01.514 16:21:02 -- pm/common@52 -- $ sudo kill -TERM 3516112 00:25:01.773 16:21:02 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:01.773 16:21:02 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:25:01.773 16:21:02 -- pm/common@45 -- $ pid=3516113 00:25:01.773 16:21:02 -- pm/common@52 -- $ sudo kill -TERM 3516113 00:25:01.773 16:21:02 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:01.773 16:21:02 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:25:01.773 16:21:02 -- pm/common@45 -- $ pid=3516115 00:25:01.773 16:21:02 -- pm/common@52 -- $ sudo kill -TERM 3516115 00:25:01.773 + [[ -n 3196660 ]] 00:25:01.773 + sudo kill 3196660 00:25:01.784 [Pipeline] } 00:25:01.802 [Pipeline] // stage 00:25:01.808 [Pipeline] } 00:25:01.824 [Pipeline] // timeout 00:25:01.830 [Pipeline] } 00:25:01.847 [Pipeline] // catchError 00:25:01.852 [Pipeline] } 00:25:01.868 [Pipeline] // wrap 00:25:01.872 [Pipeline] } 00:25:01.888 [Pipeline] // catchError 00:25:01.896 [Pipeline] stage 00:25:01.898 [Pipeline] { (Epilogue) 00:25:01.912 [Pipeline] catchError 00:25:01.913 [Pipeline] { 00:25:01.929 [Pipeline] echo 00:25:01.930 Cleanup processes 00:25:01.936 [Pipeline] sh 00:25:02.221 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:02.221 3516249 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:25:02.221 3516385 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:02.236 [Pipeline] sh 00:25:02.520 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:25:02.521 ++ grep -v 'sudo pgrep' 00:25:02.521 ++ awk '{print $1}' 00:25:02.521 + sudo kill -9 3516249 00:25:02.534 [Pipeline] sh 00:25:02.816 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:10.939 [Pipeline] sh 00:25:11.224 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:11.224 Artifacts sizes are good 00:25:11.240 [Pipeline] archiveArtifacts 00:25:11.248 Archiving artifacts 00:25:11.430 [Pipeline] sh 00:25:11.713 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:25:11.738 [Pipeline] cleanWs 00:25:11.786 [WS-CLEANUP] Deleting project workspace... 00:25:11.786 [WS-CLEANUP] Deferred wipeout is used... 00:25:11.793 [WS-CLEANUP] done 00:25:11.795 [Pipeline] } 00:25:11.817 [Pipeline] // catchError 00:25:11.830 [Pipeline] sh 00:25:12.118 + logger -p user.info -t JENKINS-CI 00:25:12.127 [Pipeline] } 00:25:12.143 [Pipeline] // stage 00:25:12.150 [Pipeline] } 00:25:12.167 [Pipeline] // node 00:25:12.173 [Pipeline] End of Pipeline 00:25:12.211 Finished: SUCCESS
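
[Annotation] Two closing notes on the tail of the trace. First, every daemon in the run is stopped through the killprocess idiom visible in the xtrace (autotest_common.sh@936-@960): check the PID argument, probe liveness with kill -0, sanity-check the comm name, then SIGTERM and wait so the shell reaps the exit status. A simplified reading of that helper (the real one also branches on the OS and special-cases sudo-wrapped processes):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0      # already gone
      ps --no-headers -o comm= "$pid"             # what are we about to kill?
      kill "$pid"                                 # SIGTERM: let SPDK shut down
      wait "$pid" 2>/dev/null || true             # reap; nonzero exit tolerated
  }

Second, the coverage step near the end reduces to four lcov operations: capture test-time counters into cov_test.info, merge them with the pre-test baseline cov_base.info, then strip out-of-tree and example-app paths so only SPDK sources remain in cov_total.info. Condensed from the commands in the trace (the job passes a longer --rc set that also tunes genhtml, abbreviated here; run from the SPDK root):

  RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
  lcov $RC --no-external -q -c -d . -t "$(hostname)" -o cov_test.info
  lcov $RC --no-external -q -a cov_base.info -a cov_test.info -o cov_total.info
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
             '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $RC --no-external -q -r cov_total.info "$pat" -o cov_total.info
  done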